linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE
@ 2025-07-03  1:48 Dong Yibo
  2025-07-03  1:48 ` [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe Dong Yibo
                   ` (14 more replies)
  0 siblings, 15 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Hi maintainers,

This patch series introduces support for MUCSE N500/N210 1Gbps Ethernet
controllers. Only basic tx/rx support is included; more features will be
added in the future.

The driver has been tested on the following platform:
   - Kernel version: 6.16.0-rc3
   - Intel Xeon Processor

Changelog:
v1: Initial submission

Patch list:
  0001: net: rnpgbe: Add build support for rnpgbe
  0002: net: rnpgbe: Add n500/n210 chip support
  0003: net: rnpgbe: Add basic mbx ops support
  0004: net: rnpgbe: Add get_capability mbx_fw ops support
  0005: net: rnpgbe: Add download firmware for n210 chip
  0006: net: rnpgbe: Add some functions for hw->ops
  0007: net: rnpgbe: Add get mac from hw
  0008: net: rnpgbe: Add irq support
  0009: net: rnpgbe: Add netdev register and init tx/rx memory
  0010: net: rnpgbe: Add netdev irq in open
  0011: net: rnpgbe: Add setup hw ring-vector, true up/down hw
  0012: net: rnpgbe: Add link up handler
  0013: net: rnpgbe: Add base tx functions
  0014: net: rnpgbe: Add base rx function
  0015: net: rnpgbe: Add ITR for rx

Best regards,
Dong Yibo


Dong Yibo (15):
  net: rnpgbe: Add build support for rnpgbe
  net: rnpgbe: Add n500/n210 chip support
  net: rnpgbe: Add basic mbx ops support
  net: rnpgbe: Add get_capability mbx_fw ops support
  net: rnpgbe: Add download firmware for n210 chip
  net: rnpgbe: Add some functions for hw->ops
  net: rnpgbe: Add get mac from hw
  net: rnpgbe: Add irq support
  net: rnpgbe: Add netdev register and init tx/rx memory
  net: rnpgbe: Add netdev irq in open
  net: rnpgbe: Add setup hw ring-vector, true up/down hw
  net: rnpgbe: Add link up handler
  net: rnpgbe: Add base tx functions
  net: rnpgbe: Add base rx function
  net: rnpgbe: Add ITR for rx

 .../device_drivers/ethernet/index.rst         |    1 +
 .../device_drivers/ethernet/mucse/rnpgbe.rst  |   21 +
 MAINTAINERS                                   |   14 +-
 drivers/net/ethernet/Kconfig                  |    1 +
 drivers/net/ethernet/Makefile                 |    1 +
 drivers/net/ethernet/mucse/Kconfig            |   35 +
 drivers/net/ethernet/mucse/Makefile           |    7 +
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   13 +
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  738 ++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  515 ++++
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |   66 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 2245 +++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  143 ++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  936 +++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c    |  622 +++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h    |   49 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c |  650 +++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h |  651 +++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c    |  236 ++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h    |   30 +
 20 files changed, 6969 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst
 create mode 100644 drivers/net/ethernet/mucse/Kconfig
 create mode 100644 drivers/net/ethernet/mucse/Makefile
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/Makefile
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03 16:25   ` Andrew Lunn
  2025-07-03  1:48 ` [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support Dong Yibo
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add build options and documentation for the MUCSE driver.
Initialize PCI device access for MUCSE devices.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 .../device_drivers/ethernet/index.rst         |   1 +
 .../device_drivers/ethernet/mucse/rnpgbe.rst  |  21 ++
 MAINTAINERS                                   |  14 +-
 drivers/net/ethernet/Kconfig                  |   1 +
 drivers/net/ethernet/Makefile                 |   1 +
 drivers/net/ethernet/mucse/Kconfig            |  35 +++
 drivers/net/ethernet/mucse/Makefile           |   7 +
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   9 +
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  35 +++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   | 221 ++++++++++++++++++
 10 files changed, 340 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst
 create mode 100644 drivers/net/ethernet/mucse/Kconfig
 create mode 100644 drivers/net/ethernet/mucse/Makefile
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/Makefile
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 139b4c75a191..f6db071210d9 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -59,6 +59,7 @@ Contents:
    ti/icssg_prueth
    wangxun/txgbe
    wangxun/ngbe
+   mucse/rnpgbe
 
 .. only::  subproject and html
 
diff --git a/Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst b/Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst
new file mode 100644
index 000000000000..5f0f338dc3d4
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst
@@ -0,0 +1,21 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================================================
+Linux Base Driver for MUCSE(R) Gigabit PCI Express Adapters
+===========================================================
+
+MUCSE Gigabit Linux driver.
+Copyright (c) 2020 - 2025 MUCSE Co.,Ltd.
+
+Identifying Your Adapter
+========================
+The driver is compatible with devices based on the following:
+
+ * MUCSE(R) Ethernet Controller N500 series
+ * MUCSE(R) Ethernet Controller N210 series
+
+Support
+=======
+If you have problems with the software or hardware, please contact our
+customer support team via email at marketing@mucse.com or check our website
+at https://www.mucse.com/en/
diff --git a/MAINTAINERS b/MAINTAINERS
index bb9df569a3ff..b43ca711cc5d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16001,11 +16001,11 @@ F:	tools/testing/vma/
 
 MEMORY MAPPING - LOCKING
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	Suren Baghdasaryan <surenb@google.com>
-M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
-R:	Vlastimil Babka <vbabka@suse.cz>
-R:	Shakeel Butt <shakeel.butt@linux.dev>
+M:	Suren Baghdasaryan <surenb@google.com>
+M:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Vlastimil Babka <vbabka@suse.cz>
+R:	Shakeel Butt <shakeel.butt@linux.dev>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	http://www.linux-mm.org
@@ -16986,6 +16982,14 @@ T:	git git://linuxtv.org/media.git
 F:	Documentation/devicetree/bindings/media/i2c/aptina,mt9v111.yaml
 F:	drivers/media/i2c/mt9v111.c
 
+MUCSE ETHERNET DRIVER
+M:	Yibo Dong <dong100@mucse.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+W:	https://www.mucse.com/en/
+F:	Documentation/networking/device_drivers/ethernet/mucse/*
+F:	drivers/net/ethernet/mucse/*
+
 MULTIFUNCTION DEVICES (MFD)
 M:	Lee Jones <lee@kernel.org>
 S:	Maintained
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index f86d4557d8d7..77c55fa11942 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -202,5 +202,6 @@ source "drivers/net/ethernet/wangxun/Kconfig"
 source "drivers/net/ethernet/wiznet/Kconfig"
 source "drivers/net/ethernet/xilinx/Kconfig"
 source "drivers/net/ethernet/xircom/Kconfig"
+source "drivers/net/ethernet/mucse/Kconfig"
 
 endif # ETHERNET
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 67182339469a..696825bd1211 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -107,3 +107,4 @@ obj-$(CONFIG_NET_VENDOR_XIRCOM) += xircom/
 obj-$(CONFIG_NET_VENDOR_SYNOPSYS) += synopsys/
 obj-$(CONFIG_NET_VENDOR_PENSANDO) += pensando/
 obj-$(CONFIG_OA_TC6) += oa_tc6.o
+obj-$(CONFIG_NET_VENDOR_MUCSE) += mucse/
diff --git a/drivers/net/ethernet/mucse/Kconfig b/drivers/net/ethernet/mucse/Kconfig
new file mode 100644
index 000000000000..5825b37fcd50
--- /dev/null
+++ b/drivers/net/ethernet/mucse/Kconfig
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Mucse network device configuration
+#
+
+config NET_VENDOR_MUCSE
+	bool "Mucse devices"
+	default y
+	help
+	  If you have a network (Ethernet) card belonging to this class, say Y.
+
+	  Note that the answer to this question doesn't directly affect the
+	  kernel: saying N will just cause the configurator to skip all
+	  the questions about Mucse cards. If you say Y, you will be asked for
+	  your specific card in the following questions.
+
+
+if NET_VENDOR_MUCSE
+
+config MGBE
+	tristate "Mucse(R) 1GbE PCI Express adapters support"
+	depends on PCI
+	select PAGE_POOL
+	help
+	  This driver supports Mucse(R) 1GbE PCI Express family of
+	  adapters.
+
+	  More specific information on configuring the driver is in
+	  <file:Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst>.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called rnpgbe.
+
+endif # NET_VENDOR_MUCSE
+
diff --git a/drivers/net/ethernet/mucse/Makefile b/drivers/net/ethernet/mucse/Makefile
new file mode 100644
index 000000000000..f0bd79882488
--- /dev/null
+++ b/drivers/net/ethernet/mucse/Makefile
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Mucse(R) network device drivers.
+#
+
+obj-$(CONFIG_MGBE) += rnpgbe/
+
diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ethernet/mucse/rnpgbe/Makefile
new file mode 100644
index 000000000000..0942e27f5913
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright(c) 2020 - 2025 MUCSE Corporation.
+#
+# Makefile for the MUCSE(R) 1GbE PCI Express ethernet driver
+#
+
+obj-$(CONFIG_MGBE) += rnpgbe.o
+
+rnpgbe-objs := rnpgbe_main.o
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
new file mode 100644
index 000000000000..a44e6b6255d8
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#ifndef _RNPGBE_H
+#define _RNPGBE_H
+
+#define RNPGBE_MAX_QUEUES (8)
+
+enum rnpgbe_boards {
+	board_n500,
+	board_n210,
+	board_n210L,
+};
+
+struct mucse {
+	struct net_device *netdev;
+	struct pci_dev *pdev;
+	/* board number */
+	u16 bd_number;
+
+	char name[60];
+};
+
+/* Device IDs */
+#ifndef PCI_VENDOR_ID_MUCSE
+#define PCI_VENDOR_ID_MUCSE 0x8848
+#endif /* PCI_VENDOR_ID_MUCSE */
+
+#define PCI_DEVICE_ID_N500_QUAD_PORT 0x8308
+#define PCI_DEVICE_ID_N500_DUAL_PORT 0x8318
+#define PCI_DEVICE_ID_N500_VF 0x8309
+#define PCI_DEVICE_ID_N210 0x8208
+#define PCI_DEVICE_ID_N210L 0x820a
+
+#endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
new file mode 100644
index 000000000000..b32b70c98b46
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -0,0 +1,221 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/string.h>
+#include <linux/etherdevice.h>
+
+#include "rnpgbe.h"
+
+char rnpgbe_driver_name[] = "rnpgbe";
+static const char rnpgbe_driver_string[] =
+	"mucse 1 Gigabit PCI Express Network Driver";
+#define DRV_VERSION "1.0.0"
+const char rnpgbe_driver_version[] = DRV_VERSION;
+static const char rnpgbe_copyright[] =
+	"Copyright (c) 2020-2025 mucse Corporation.";
+
+/* rnpgbe_pci_tbl - PCI Device ID Table
+ *
+ * { PCI_DEVICE(Vendor ID, Device ID),
+ *   driver_data (used for different hw chip) }
+ */
+static struct pci_device_id rnpgbe_pci_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_MUCSE, PCI_DEVICE_ID_N500_QUAD_PORT),
+	  .driver_data = board_n500},
+	{ PCI_DEVICE(PCI_VENDOR_ID_MUCSE, PCI_DEVICE_ID_N500_DUAL_PORT),
+	  .driver_data = board_n500},
+	{ PCI_DEVICE(PCI_VENDOR_ID_MUCSE, PCI_DEVICE_ID_N210),
+	  .driver_data = board_n210},
+	{ PCI_DEVICE(PCI_VENDOR_ID_MUCSE, PCI_DEVICE_ID_N210L),
+	  .driver_data = board_n210L},
+	/* required last entry */
+	{0, },
+};
+
+/**
+ * rnpgbe_add_adpater - add netdev for this pci_dev
+ * @pdev: PCI device information structure
+ *
+ * rnpgbe_add_adpater initializes a netdev for this pci_dev
+ * structure. It initializes the BAR map and the private
+ * structure, and performs a hardware reset.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int rnpgbe_add_adpater(struct pci_dev *pdev)
+{
+	struct mucse *mucse = NULL;
+	struct net_device *netdev;
+	static int bd_number;
+
+	pr_info("====  add rnpgbe queues:%d ====", RNPGBE_MAX_QUEUES);
+	netdev = alloc_etherdev_mq(sizeof(struct mucse), RNPGBE_MAX_QUEUES);
+	if (!netdev)
+		return -ENOMEM;
+
+	mucse = netdev_priv(netdev);
+	memset((char *)mucse, 0x00, sizeof(struct mucse));
+	mucse->netdev = netdev;
+	mucse->pdev = pdev;
+	mucse->bd_number = bd_number++;
+	snprintf(mucse->name, sizeof(mucse->name), "%s%d",
+		 rnpgbe_driver_name, mucse->bd_number);
+	pci_set_drvdata(pdev, mucse);
+
+	return 0;
+}
+
+/**
+ * rnpgbe_probe - Device Initialization Routine
+ * @pdev: PCI device information struct
+ * @id: entry in rnpgbe_pci_tbl
+ *
+ * rnpgbe_probe initializes a PF adapter identified by a pci_dev
+ * structure. It performs the OS initialization, then calls
+ * rnpgbe_add_adpater to initialize the netdev.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int rnpgbe_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	int err;
+
+	err = pci_enable_device_mem(pdev);
+	if (err)
+		return err;
+
+	/* hw only supports a 56-bit dma mask */
+	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(56));
+	if (err) {
+		dev_err(&pdev->dev,
+			"No usable DMA configuration, aborting\n");
+		goto err_dma;
+	}
+
+	err = pci_request_mem_regions(pdev, rnpgbe_driver_name);
+	if (err) {
+		dev_err(&pdev->dev,
+			"pci_request_mem_regions failed 0x%x\n", err);
+		goto err_pci_req;
+	}
+
+	pci_set_master(pdev);
+	pci_save_state(pdev);
+	err = rnpgbe_add_adpater(pdev);
+	if (err)
+		goto err_regions;
+
+	return 0;
+err_regions:
+	pci_release_mem_regions(pdev);
+err_dma:
+err_pci_req:
+	pci_disable_device(pdev);
+	return err;
+}
+
+/**
+ * rnpgbe_rm_adpater - remove netdev for this mucse structure
+ * @mucse: pointer to private structure
+ *
+ * rnpgbe_rm_adpater removes the netdev for this mucse structure
+ **/
+static void rnpgbe_rm_adpater(struct mucse *mucse)
+{
+	struct net_device *netdev;
+
+	netdev = mucse->netdev;
+	pr_info("= remove rnpgbe:%s =\n", netdev->name);
+	free_netdev(netdev);
+	pr_info("remove complete\n");
+}
+
+/**
+ * rnpgbe_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+ *
+ * rnpgbe_remove is called by the PCI subsystem to alert the driver
+ * that it should release a PCI device.  This could be caused by a
+ * Hot-Plug event, or because the driver is going to be removed from
+ * memory.
+ **/
+static void rnpgbe_remove(struct pci_dev *pdev)
+{
+	struct mucse *mucse = pci_get_drvdata(pdev);
+
+	if (!mucse)
+		return;
+
+	rnpgbe_rm_adpater(mucse);
+	pci_release_mem_regions(pdev);
+	pci_disable_device(pdev);
+}
+
+static void __rnpgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
+{
+	struct mucse *mucse = pci_get_drvdata(pdev);
+	struct net_device *netdev = mucse->netdev;
+
+	*enable_wake = false;
+	netif_device_detach(netdev);
+	pci_disable_device(pdev);
+}
+
+/**
+ * rnpgbe_shutdown - Device Shutdown Routine
+ * @pdev: PCI device information struct
+ *
+ * rnpgbe_shutdown is called by the PCI subsystem to alert the driver
+ * that the OS is shutting down. Set up the device wakeup state here.
+ **/
+static void rnpgbe_shutdown(struct pci_dev *pdev)
+{
+	bool wake = false;
+
+	__rnpgbe_shutdown(pdev, &wake);
+
+	if (system_state == SYSTEM_POWER_OFF) {
+		pci_wake_from_d3(pdev, wake);
+		pci_set_power_state(pdev, PCI_D3hot);
+	}
+}
+
+static struct pci_driver rnpgbe_driver = {
+	.name = rnpgbe_driver_name,
+	.id_table = rnpgbe_pci_tbl,
+	.probe = rnpgbe_probe,
+	.remove = rnpgbe_remove,
+	.shutdown = rnpgbe_shutdown,
+};
+
+static int __init rnpgbe_init_module(void)
+{
+	int ret;
+
+	pr_info("%s - version %s\n", rnpgbe_driver_string,
+		rnpgbe_driver_version);
+	pr_info("%s\n", rnpgbe_copyright);
+	ret = pci_register_driver(&rnpgbe_driver);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+module_init(rnpgbe_init_module);
+
+static void __exit rnpgbe_exit_module(void)
+{
+	pci_unregister_driver(&rnpgbe_driver);
+}
+
+module_exit(rnpgbe_exit_module);
+
+MODULE_DEVICE_TABLE(pci, rnpgbe_pci_tbl);
+MODULE_AUTHOR("Mucse Corporation, <mucse@mucse.com>");
+MODULE_DESCRIPTION("Mucse(R) 1 Gigabit PCI Express Network Driver");
+MODULE_LICENSE("GPL");
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
  2025-07-03  1:48 ` [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-04 18:03   ` Andrew Lunn
  2025-07-03  1:48 ` [PATCH 03/15] net: rnpgbe: Add basic mbx ops support Dong Yibo
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Initialize the n500/n210 chip BAR resource map and the
dma, eth, mbx ... info for future use.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   4 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    | 139 +++++++++++++++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   | 126 ++++++++++++++++
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |  27 ++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  83 ++++++++++-
 5 files changed, 372 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h

diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ethernet/mucse/rnpgbe/Makefile
index 0942e27f5913..42c359f459d9 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/Makefile
+++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile
@@ -5,5 +5,5 @@
 #
 
 obj-$(CONFIG_MGBE) += rnpgbe.o
-
-rnpgbe-objs := rnpgbe_main.o
+rnpgbe-objs := rnpgbe_main.o\
+	       rnpgbe_chip.o
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index a44e6b6255d8..664b6c86819a 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -4,7 +4,12 @@
 #ifndef _RNPGBE_H
 #define _RNPGBE_H
 
-#define RNPGBE_MAX_QUEUES (8)
+#include <linux/types.h>
+#include <linux/netdevice.h>
+
+extern const struct rnpgbe_info rnpgbe_n500_info;
+extern const struct rnpgbe_info rnpgbe_n210_info;
+extern const struct rnpgbe_info rnpgbe_n210L_info;
 
 enum rnpgbe_boards {
 	board_n500,
@@ -12,15 +17,144 @@ enum rnpgbe_boards {
 	board_n210L,
 };
 
+enum rnpgbe_hw_type {
+	rnpgbe_hw_n500 = 0,
+	rnpgbe_hw_n210,
+	rnpgbe_hw_n210L,
+};
+
+struct mucse_dma_info {
+	u8 __iomem *dma_base_addr;
+	u8 __iomem *dma_ring_addr;
+	void *back;
+	u32 max_tx_queues;
+	u32 max_rx_queues;
+	u32 dma_version;
+};
+
+#define RNPGBE_MAX_MTA 128
+struct mucse_eth_info {
+	u8 __iomem *eth_base_addr;
+	void *back;
+	u32 mta_shadow[RNPGBE_MAX_MTA];
+	s32 mc_filter_type;
+	u32 mcft_size;
+	u32 vft_size;
+	u32 num_rar_entries;
+};
+
+struct mii_regs {
+	unsigned int addr; /* MII Address */
+	unsigned int data; /* MII Data */
+	unsigned int addr_shift; /* MII address shift */
+	unsigned int reg_shift; /* MII reg shift */
+	unsigned int addr_mask; /* MII address mask */
+	unsigned int reg_mask; /* MII reg mask */
+	unsigned int clk_csr_shift;
+	unsigned int clk_csr_mask;
+};
+
+struct mucse_mac_info {
+	u8 __iomem *mac_addr;
+	void *back;
+	struct mii_regs mii;
+	int phy_addr;
+	int clk_csr;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+};
+
+#define MAX_VF_NUM (8)
+
+struct mucse_mbx_info {
+	u32 timeout;
+	u32 usec_delay;
+	u32 v2p_mailbox;
+	u16 size;
+	u16 vf_req[MAX_VF_NUM];
+	u16 vf_ack[MAX_VF_NUM];
+	u16 cpu_req;
+	u16 cpu_ack;
+	/* lock for only one user */
+	struct mutex lock;
+	bool other_irq_enabled;
+	int mbx_size;
+	int mbx_mem_size;
+#define MBX_FEATURE_NO_ZERO BIT(0)
+#define MBX_FEATURE_WRITE_DELAY BIT(1)
+	u32 mbx_feature;
+	/* cm3 <-> pf mbx */
+	u32 cpu_pf_shm_base;
+	u32 pf2cpu_mbox_ctrl;
+	u32 pf2cpu_mbox_mask;
+	u32 cpu_pf_mbox_mask;
+	u32 cpu2pf_mbox_vec;
+	/* pf <--> vf mbx */
+	u32 pf_vf_shm_base;
+	u32 pf2vf_mbox_ctrl_base;
+	u32 pf_vf_mbox_mask_lo;
+	u32 pf_vf_mbox_mask_hi;
+	u32 pf2vf_mbox_vec_base;
+	u32 vf2pf_mbox_vec_base;
+	u32 cpu_vf_share_ram;
+	int share_size;
+};
+
+struct mucse_hw {
+	void *back;
+	u8 pfvfnum;
+	u8 pfvfnum_system;
+	u8 __iomem *hw_addr;
+	u8 __iomem *ring_msix_base;
+	struct pci_dev *pdev;
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	enum rnpgbe_hw_type hw_type;
+	struct mucse_dma_info dma;
+	struct mucse_eth_info eth;
+	struct mucse_mac_info mac;
+	struct mucse_mbx_info mbx;
+#define M_NET_FEATURE_SG ((u32)(1 << 0))
+#define M_NET_FEATURE_TX_CHECKSUM ((u32)(1 << 1))
+#define M_NET_FEATURE_RX_CHECKSUM ((u32)(1 << 2))
+#define M_NET_FEATURE_TSO ((u32)(1 << 3))
+#define M_NET_FEATURE_TX_UDP_TUNNEL ((u32)(1 << 4))
+#define M_NET_FEATURE_VLAN_FILTER ((u32)(1 << 5))
+#define M_NET_FEATURE_VLAN_OFFLOAD ((u32)(1 << 6))
+#define M_NET_FEATURE_RX_NTUPLE_FILTER ((u32)(1 << 7))
+#define M_NET_FEATURE_TCAM ((u32)(1 << 8))
+#define M_NET_FEATURE_RX_HASH ((u32)(1 << 9))
+#define M_NET_FEATURE_RX_FCS ((u32)(1 << 10))
+#define M_NET_FEATURE_HW_TC ((u32)(1 << 11))
+#define M_NET_FEATURE_USO ((u32)(1 << 12))
+#define M_NET_FEATURE_STAG_FILTER ((u32)(1 << 13))
+#define M_NET_FEATURE_STAG_OFFLOAD ((u32)(1 << 14))
+#define M_NET_FEATURE_VF_FIXED ((u32)(1 << 15))
+#define M_VEB_VLAN_MASK_EN ((u32)(1 << 16))
+#define M_HW_FEATURE_EEE ((u32)(1 << 17))
+#define M_HW_SOFT_MASK_OTHER_IRQ ((u32)(1 << 18))
+	u32 feature_flags;
+	u16 usecstocount;
+};
+
 struct mucse {
 	struct net_device *netdev;
 	struct pci_dev *pdev;
+	struct mucse_hw hw;
 	/* board number */
 	u16 bd_number;
 
 	char name[60];
 };
 
+struct rnpgbe_info {
+	int total_queue_pair_cnts;
+	enum rnpgbe_hw_type hw_type;
+	void (*get_invariants)(struct mucse_hw *hw);
+};
+
 /* Device IDs */
 #ifndef PCI_VENDOR_ID_MUCSE
 #define PCI_VENDOR_ID_MUCSE 0x8848
@@ -32,4 +166,7 @@ struct mucse {
 #define PCI_DEVICE_ID_N210 0x8208
 #define PCI_DEVICE_ID_N210L 0x820a
 
+#define rnpgbe_rd_reg(reg) readl((void *)(reg))
+#define rnpgbe_wr_reg(reg, val) writel((val), (void *)(reg))
+
 #endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
new file mode 100644
index 000000000000..5580298eabb6
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include "rnpgbe.h"
+#include "rnpgbe_hw.h"
+
+/**
+ * rnpgbe_get_invariants_n500 - setup for hw info
+ * @hw: hw information structure
+ *
+ * rnpgbe_get_invariants_n500 initializes all private
+ * structures, such as dma, eth, mac and mbx, based on
+ * hw->hw_addr.
+ *
+ **/
+static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
+{
+	struct mucse_dma_info *dma = &hw->dma;
+	struct mucse_eth_info *eth = &hw->eth;
+	struct mucse_mac_info *mac = &hw->mac;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	/* setup msix base */
+	hw->ring_msix_base = hw->hw_addr + 0x28700;
+	/* setup dma info */
+	dma->dma_base_addr = hw->hw_addr;
+	dma->dma_ring_addr = hw->hw_addr + RNPGBE_RING_BASE;
+	dma->max_tx_queues = RNPGBE_MAX_QUEUES;
+	dma->max_rx_queues = RNPGBE_MAX_QUEUES;
+	dma->back = hw;
+	/* setup eth info */
+	eth->eth_base_addr = hw->hw_addr + RNPGBE_ETH_BASE;
+	eth->back = hw;
+	eth->mc_filter_type = 0;
+	eth->mcft_size = RNPGBE_MC_TBL_SIZE;
+	eth->vft_size = RNPGBE_VFT_TBL_SIZE;
+	eth->num_rar_entries = RNPGBE_RAR_ENTRIES;
+	/* setup mac info */
+	mac->mac_addr = hw->hw_addr + RNPGBE_MAC_BASE;
+	mac->back = hw;
+	/* set mac->mii */
+	mac->mii.addr = RNPGBE_MII_ADDR;
+	mac->mii.data = RNPGBE_MII_DATA;
+	mac->mii.addr_shift = 11;
+	mac->mii.addr_mask = 0x0000F800;
+	mac->mii.reg_shift = 6;
+	mac->mii.reg_mask = 0x000007C0;
+	mac->mii.clk_csr_shift = 2;
+	mac->mii.clk_csr_mask = GENMASK(5, 2);
+	mac->clk_csr = 0x02; /* csr 25M */
+	/* hw fixed phy_addr */
+	mac->phy_addr = 0x11;
+
+	mbx->mbx_feature |= MBX_FEATURE_NO_ZERO;
+	/* mbx offset */
+	mbx->vf2pf_mbox_vec_base = 0x28900;
+	mbx->cpu2pf_mbox_vec = 0x28b00;
+	mbx->pf_vf_shm_base = 0x29000;
+	mbx->mbx_mem_size = 64;
+	mbx->pf2vf_mbox_ctrl_base = 0x2a100;
+	mbx->pf_vf_mbox_mask_lo = 0x2a200;
+	mbx->pf_vf_mbox_mask_hi = 0;
+	mbx->cpu_pf_shm_base = 0x2d000;
+	mbx->pf2cpu_mbox_ctrl = 0x2e000;
+	mbx->cpu_pf_mbox_mask = 0x2e200;
+	mbx->cpu_vf_share_ram = 0x2b000;
+	mbx->share_size = 512;
+
+	/* setup net feature here */
+	hw->feature_flags |=
+		M_NET_FEATURE_SG | M_NET_FEATURE_TX_CHECKSUM |
+		M_NET_FEATURE_RX_CHECKSUM | M_NET_FEATURE_TSO |
+		M_NET_FEATURE_VLAN_FILTER | M_NET_FEATURE_VLAN_OFFLOAD |
+		M_NET_FEATURE_RX_NTUPLE_FILTER | M_NET_FEATURE_RX_HASH |
+		M_NET_FEATURE_USO | M_NET_FEATURE_RX_FCS |
+		M_NET_FEATURE_STAG_FILTER | M_NET_FEATURE_STAG_OFFLOAD;
+	/* start with the default hz, update later */
+	hw->usecstocount = 125;
+}
+
+static void rnpgbe_get_invariants_n210(struct mucse_hw *hw)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	/* get invariants based on n500 */
+	rnpgbe_get_invariants_n500(hw);
+
+	/* update msix base */
+	hw->ring_msix_base = hw->hw_addr + 0x29000;
+	/* update mbx offset */
+	mbx->vf2pf_mbox_vec_base = 0x29200;
+	mbx->cpu2pf_mbox_vec = 0x29400;
+	mbx->pf_vf_shm_base = 0x29900;
+	mbx->mbx_mem_size = 64;
+	mbx->pf2vf_mbox_ctrl_base = 0x2aa00;
+	mbx->pf_vf_mbox_mask_lo = 0x2ab00;
+	mbx->pf_vf_mbox_mask_hi = 0;
+	mbx->cpu_pf_shm_base = 0x2d900;
+	mbx->pf2cpu_mbox_ctrl = 0x2e900;
+	mbx->cpu_pf_mbox_mask = 0x2eb00;
+	mbx->cpu_vf_share_ram = 0x2b900;
+	mbx->share_size = 512;
+	/* update hw feature */
+	hw->feature_flags |= M_HW_FEATURE_EEE;
+	hw->usecstocount = 62;
+}
+
+const struct rnpgbe_info rnpgbe_n500_info = {
+	.total_queue_pair_cnts = RNPGBE_MAX_QUEUES,
+	.hw_type = rnpgbe_hw_n500,
+	.get_invariants = &rnpgbe_get_invariants_n500,
+};
+
+const struct rnpgbe_info rnpgbe_n210_info = {
+	.total_queue_pair_cnts = RNPGBE_MAX_QUEUES,
+	.hw_type = rnpgbe_hw_n210,
+	.get_invariants = &rnpgbe_get_invariants_n210,
+};
+
+const struct rnpgbe_info rnpgbe_n210L_info = {
+	.total_queue_pair_cnts = RNPGBE_MAX_QUEUES,
+	.hw_type = rnpgbe_hw_n210L,
+	.get_invariants = &rnpgbe_get_invariants_n210,
+};
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
new file mode 100644
index 000000000000..2c7372a5e88d
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#ifndef _RNPGBE_HW_H
+#define _RNPGBE_HW_H
+/*                     BAR                   */
+/* ----------------------------------------- */
+/*      module  | size  |  start   |    end  */
+/*      DMA     | 32KB  | 0_0000H  | 0_7FFFH */
+/*      ETH     | 64KB  | 1_0000H  | 1_FFFFH */
+/*      MAC     | 32KB  | 2_0000H  | 2_7FFFH */
+/*      MSIX    | 32KB  | 2_8000H  | 2_FFFFH */
+
+#define RNPGBE_RING_BASE (0x1000)
+#define RNPGBE_MAC_BASE (0x20000)
+#define RNPGBE_ETH_BASE (0x10000)
+/* chip resource */
+#define RNPGBE_MAX_QUEUES (8)
+/* multicast control table */
+#define RNPGBE_MC_TBL_SIZE (128)
+/* vlan filter table */
+#define RNPGBE_VFT_TBL_SIZE (128)
+#define RNPGBE_RAR_ENTRIES (32)
+
+#define RNPGBE_MII_ADDR 0x00000010 /* MII Address */
+#define RNPGBE_MII_DATA 0x00000014 /* MII Data */
+#endif /* _RNPGBE_HW_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index b32b70c98b46..30c5a4874929 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -17,6 +17,11 @@ static const char rnpgbe_driver_string[] =
 const char rnpgbe_driver_version[] = DRV_VERSION;
 static const char rnpgbe_copyright[] =
 	"Copyright (c) 2020-2025 mucse Corporation.";
+static const struct rnpgbe_info *rnpgbe_info_tbl[] = {
+	[board_n500] = &rnpgbe_n500_info,
+	[board_n210] = &rnpgbe_n210_info,
+	[board_n210L] = &rnpgbe_n210L_info,
+};
 
 /* rnpgbe_pci_tbl - PCI Device ID Table
  *
@@ -36,9 +41,24 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
 	{0, },
 };
 
+/**
+ * init_firmware_for_n210 - download firmware
+ * @hw: hardware structure
+ *
+ * init_firmware_for_n210 tries to download firmware
+ * for n210 through bar0 (hw->hw_addr).
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int init_firmware_for_n210(struct mucse_hw *hw)
+{
+	return 0;
+}
+
 /**
  * rnpgbe_add_adpater - add netdev for this pci_dev
  * @pdev: PCI device information structure
+ * @ii: chip info structure
  *
  * rnpgbe_add_adpater initializes a netdev for this pci_dev
  * structure. It initializes the BAR map and the private
@@ -46,17 +66,24 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
  *
  * Returns 0 on success, negative on failure
  **/
-static int rnpgbe_add_adpater(struct pci_dev *pdev)
+static int rnpgbe_add_adpater(struct pci_dev *pdev,
+			      const struct rnpgbe_info *ii)
 {
+	int err = 0;
 	struct mucse *mucse = NULL;
 	struct net_device *netdev;
+	struct mucse_hw *hw = NULL;
+	u8 __iomem *hw_addr = NULL;
+	u32 dma_version = 0;
 	static int bd_number;
+	u32 queues = ii->total_queue_pair_cnts;
 
-	pr_info("====  add rnpgbe queues:%d ====", RNPGBE_MAX_QUEUES);
-	netdev = alloc_etherdev_mq(sizeof(struct mucse), RNPGBE_MAX_QUEUES);
+	pr_info("====  add rnpgbe queues:%d ====\n", queues);
+	netdev = alloc_etherdev_mq(sizeof(struct mucse), queues);
 	if (!netdev)
 		return -ENOMEM;
 
+	SET_NETDEV_DEV(netdev, &pdev->dev);
 	mucse = netdev_priv(netdev);
 	memset((char *)mucse, 0x00, sizeof(struct mucse));
 	mucse->netdev = netdev;
@@ -66,7 +93,54 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev)
 		 rnpgbe_driver_name, mucse->bd_number);
 	pci_set_drvdata(pdev, mucse);
 
+	hw = &mucse->hw;
+	hw->back = mucse;
+	hw->hw_type = ii->hw_type;
+
+	switch (hw->hw_type) {
+	case rnpgbe_hw_n500:
+		/* n500 uses bar2 */
+		hw_addr = devm_ioremap(&pdev->dev,
+				       pci_resource_start(pdev, 2),
+				       pci_resource_len(pdev, 2));
+		if (!hw_addr) {
+			dev_err(&pdev->dev, "map bar2 failed!\n");
+			return -EIO;
+		}
+
+		/* get dma version */
+		dma_version = rnpgbe_rd_reg(hw_addr);
+		break;
+	case rnpgbe_hw_n210:
+	case rnpgbe_hw_n210L:
+		/* check bar0 to load firmware */
+		if (pci_resource_len(pdev, 0) == 0x100000)
+			return init_firmware_for_n210(hw);
+		/* n210 uses bar2 */
+		hw_addr = devm_ioremap(&pdev->dev,
+				       pci_resource_start(pdev, 2),
+				       pci_resource_len(pdev, 2));
+		if (!hw_addr) {
+			dev_err(&pdev->dev, "map bar2 failed!\n");
+			return -EIO;
+		}
+
+		/* get dma version */
+		dma_version = rnpgbe_rd_reg(hw_addr);
+		break;
+	default:
+		err = -EIO;
+		goto err_free_net;
+	}
+	hw->hw_addr = hw_addr;
+	hw->dma.dma_version = dma_version;
+	ii->get_invariants(hw);
+
 	return 0;
+
+err_free_net:
+	free_netdev(netdev);
+	return err;
 }
 
 /**
@@ -83,6 +157,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev)
 static int rnpgbe_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	int err;
+	const struct rnpgbe_info *ii = rnpgbe_info_tbl[id->driver_data];
 
 	err = pci_enable_device_mem(pdev);
 	if (err)
@@ -105,7 +180,7 @@ static int rnpgbe_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	pci_set_master(pdev);
 	pci_save_state(pdev);
-	err = rnpgbe_add_adpater(pdev);
+	err = rnpgbe_add_adpater(pdev, ii);
 	if (err)
 		goto err_regions;
 
-- 
2.25.1
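For context, the probe flow in this patch resolves the PCI id table's driver_data index into a per-board info structure (rnpgbe_info_tbl) before mapping BAR2. A standalone sketch of that dispatch pattern, with illustrative names and values rather than the driver's real definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical mirror of the board-info dispatch: the PCI id table stores
 * a board enum in driver_data, and probe() indexes a descriptor table
 * with it. All names and values below are illustrative, not the driver's.
 */
enum board_id { board_n500, board_n210, board_n210L, board_max };

struct chip_info {
	const char *name;
	unsigned int total_queue_pairs;
};

static const struct chip_info n500_info  = { "n500", 8 };
static const struct chip_info n210_info  = { "n210", 8 };
static const struct chip_info n210L_info = { "n210L", 8 };

static const struct chip_info *info_tbl[board_max] = {
	[board_n500]  = &n500_info,
	[board_n210]  = &n210_info,
	[board_n210L] = &n210L_info,
};

/* probe() resolves driver_data to its chip descriptor in O(1) */
static const struct chip_info *lookup(unsigned long driver_data)
{
	if (driver_data >= board_max)
		return NULL;
	return info_tbl[driver_data];
}
```

With this layout, supporting a new chip is one enum value, one descriptor, and one PCI id entry.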



* [PATCH 03/15] net: rnpgbe: Add basic mbx ops support
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
  2025-07-03  1:48 ` [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe Dong Yibo
  2025-07-03  1:48 ` [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-04 18:13   ` Andrew Lunn
  2025-07-03  1:48 ` [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw " Dong Yibo
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add basic mailbox (mbx) operations and register them in hw->mbx.ops.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   5 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  64 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  25 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |   2 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |   1 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c    | 622 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h    |  48 ++
 7 files changed, 745 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h

diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ethernet/mucse/rnpgbe/Makefile
index 42c359f459d9..41177103b50c 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/Makefile
+++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile
@@ -5,5 +5,6 @@
 #
 
 obj-$(CONFIG_MGBE) += rnpgbe.o
-rnpgbe-objs := rnpgbe_main.o\
-	       rnpgbe_chip.o
+rnpgbe-objs := rnpgbe_main.o \
+	       rnpgbe_chip.o \
+	       rnpgbe_mbx.o
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 664b6c86819a..4cafab16f5bf 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -64,18 +64,60 @@ struct mucse_mac_info {
 	u8 perm_addr[ETH_ALEN];
 };
 
+struct mucse_hw;
+
+enum MBX_ID {
+	MBX_VF0 = 0,
+	MBX_VF1,
+	MBX_VF2,
+	MBX_VF3,
+	MBX_VF4,
+	MBX_VF5,
+	MBX_VF6,
+	MBX_VF7,
+	MBX_CM3CPU,
+	MBX_FW = MBX_CM3CPU,
+	MBX_VFCNT
+};
+
+struct mucse_mbx_operations {
+	s32 (*init_params)(struct mucse_hw *hw);
+	s32 (*read)(struct mucse_hw *hw, u32 *msg,
+		    u16 size, enum MBX_ID id);
+	s32 (*write)(struct mucse_hw *hw, u32 *msg,
+		     u16 size, enum MBX_ID id);
+	s32 (*read_posted)(struct mucse_hw *hw, u32 *msg,
+			   u16 size, enum MBX_ID id);
+	s32 (*write_posted)(struct mucse_hw *hw, u32 *msg,
+			    u16 size, enum MBX_ID id);
+	s32 (*check_for_msg)(struct mucse_hw *hw, enum MBX_ID id);
+	s32 (*check_for_ack)(struct mucse_hw *hw, enum MBX_ID id);
+	s32 (*configure)(struct mucse_hw *hw, int num_vec,
+			 bool enable);
+};
+
+struct mucse_mbx_stats {
+	u32 msgs_tx;
+	u32 msgs_rx;
+	u32 acks;
+	u32 reqs;
+	u32 rsts;
+};
+
 #define MAX_VF_NUM (8)
 
 struct mucse_mbx_info {
+	struct mucse_mbx_operations ops;
+	struct mucse_mbx_stats stats;
 	u32 timeout;
 	u32 usec_delay;
 	u32 v2p_mailbox;
 	u16 size;
 	u16 vf_req[MAX_VF_NUM];
 	u16 vf_ack[MAX_VF_NUM];
-	u16 cpu_req;
-	u16 cpu_ack;
-	/* lock for only one user */
+	u16 fw_req;
+	u16 fw_ack;
+	/* lock to serialize mbx access */
 	struct mutex lock;
 	bool other_irq_enabled;
 	int mbx_size;
@@ -84,11 +126,11 @@ struct mucse_mbx_info {
 #define MBX_FEATURE_WRITE_DELAY BIT(1)
 	u32 mbx_feature;
 	/* cm3 <-> pf mbx */
-	u32 cpu_pf_shm_base;
-	u32 pf2cpu_mbox_ctrl;
-	u32 pf2cpu_mbox_mask;
-	u32 cpu_pf_mbox_mask;
-	u32 cpu2pf_mbox_vec;
+	u32 fw_pf_shm_base;
+	u32 pf2fw_mbox_ctrl;
+	u32 pf2fw_mbox_mask;
+	u32 fw_pf_mbox_mask;
+	u32 fw2pf_mbox_vec;
 	/* pf <--> vf mbx */
 	u32 pf_vf_shm_base;
 	u32 pf2vf_mbox_ctrl_base;
@@ -96,10 +138,12 @@ struct mucse_mbx_info {
 	u32 pf_vf_mbox_mask_hi;
 	u32 pf2vf_mbox_vec_base;
 	u32 vf2pf_mbox_vec_base;
-	u32 cpu_vf_share_ram;
+	u32 fw_vf_share_ram;
 	int share_size;
 };
 
+#include "rnpgbe_mbx.h"
+
 struct mucse_hw {
 	void *back;
 	u8 pfvfnum;
@@ -111,6 +155,8 @@ struct mucse_hw {
 	u16 vendor_id;
 	u16 subsystem_device_id;
 	u16 subsystem_vendor_id;
+	int max_vfs;
+	int max_vfs_noari;
 	enum rnpgbe_hw_type hw_type;
 	struct mucse_dma_info dma;
 	struct mucse_eth_info eth;
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 5580298eabb6..08d082fa3066 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -6,6 +6,7 @@
 
 #include "rnpgbe.h"
 #include "rnpgbe_hw.h"
+#include "rnpgbe_mbx.h"
 
 /**
  * rnpgbe_get_invariants_n500 - setup for hw info
@@ -57,18 +58,18 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 	mbx->mbx_feature |= MBX_FEATURE_NO_ZERO;
 	/* mbx offset */
 	mbx->vf2pf_mbox_vec_base = 0x28900;
-	mbx->cpu2pf_mbox_vec = 0x28b00;
+	mbx->fw2pf_mbox_vec = 0x28b00;
 	mbx->pf_vf_shm_base = 0x29000;
 	mbx->mbx_mem_size = 64;
 	mbx->pf2vf_mbox_ctrl_base = 0x2a100;
 	mbx->pf_vf_mbox_mask_lo = 0x2a200;
 	mbx->pf_vf_mbox_mask_hi = 0;
-	mbx->cpu_pf_shm_base = 0x2d000;
-	mbx->pf2cpu_mbox_ctrl = 0x2e000;
-	mbx->cpu_pf_mbox_mask = 0x2e200;
-	mbx->cpu_vf_share_ram = 0x2b000;
+	mbx->fw_pf_shm_base = 0x2d000;
+	mbx->pf2fw_mbox_ctrl = 0x2e000;
+	mbx->fw_pf_mbox_mask = 0x2e200;
+	mbx->fw_vf_share_ram = 0x2b000;
 	mbx->share_size = 512;
-
+	memcpy(&hw->mbx.ops, &mucse_mbx_ops_generic, sizeof(hw->mbx.ops));
 	/* setup net feature here */
 	hw->feature_flags |=
 		M_NET_FEATURE_SG | M_NET_FEATURE_TX_CHECKSUM |
@@ -79,6 +80,7 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 		M_NET_FEATURE_STAG_FILTER | M_NET_FEATURE_STAG_OFFLOAD;
 	/* start with the default axi hz, updated later */
 	hw->usecstocount = 125;
+	hw->max_vfs = 7;
 }
 
 static void rnpgbe_get_invariants_n210(struct mucse_hw *hw)
@@ -91,20 +93,21 @@ static void rnpgbe_get_invariants_n210(struct mucse_hw *hw)
 	hw->ring_msix_base = hw->hw_addr + 0x29000;
 	/* update mbx offset */
 	mbx->vf2pf_mbox_vec_base = 0x29200;
-	mbx->cpu2pf_mbox_vec = 0x29400;
+	mbx->fw2pf_mbox_vec = 0x29400;
 	mbx->pf_vf_shm_base = 0x29900;
 	mbx->mbx_mem_size = 64;
 	mbx->pf2vf_mbox_ctrl_base = 0x2aa00;
 	mbx->pf_vf_mbox_mask_lo = 0x2ab00;
 	mbx->pf_vf_mbox_mask_hi = 0;
-	mbx->cpu_pf_shm_base = 0x2d900;
-	mbx->pf2cpu_mbox_ctrl = 0x2e900;
-	mbx->cpu_pf_mbox_mask = 0x2eb00;
-	mbx->cpu_vf_share_ram = 0x2b900;
+	mbx->fw_pf_shm_base = 0x2d900;
+	mbx->pf2fw_mbox_ctrl = 0x2e900;
+	mbx->fw_pf_mbox_mask = 0x2eb00;
+	mbx->fw_vf_share_ram = 0x2b900;
 	mbx->share_size = 512;
 	/* update hw feature */
 	hw->feature_flags |= M_HW_FEATURE_EEE;
 	hw->usecstocount = 62;
+	hw->max_vfs_noari = 7;
 }
 
 const struct rnpgbe_info rnpgbe_n500_info = {
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index 2c7372a5e88d..ff7bd9b21550 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -14,6 +14,8 @@
 #define RNPGBE_RING_BASE (0x1000)
 #define RNPGBE_MAC_BASE (0x20000)
 #define RNPGBE_ETH_BASE (0x10000)
+
+#define RNPGBE_DMA_DUMY (0x000c)
 /* chip resource */
 #define RNPGBE_MAX_QUEUES (8)
 /* multicast control table */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index 30c5a4874929..e125b609ba09 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -135,6 +135,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	hw->hw_addr = hw_addr;
 	hw->dma.dma_version = dma_version;
 	ii->get_invariants(hw);
+	hw->mbx.ops.init_params(hw);
 
 	return 0;
 
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c
new file mode 100644
index 000000000000..f4bfa69fdb41
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.c
@@ -0,0 +1,622 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2022 - 2025 Mucse Corporation. */
+
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include "rnpgbe.h"
+#include "rnpgbe_mbx.h"
+#include "rnpgbe_hw.h"
+
+/**
+ * mucse_read_mbx - Reads a message from the mailbox
+ * @hw: Pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: Id of vf/fw to read
+ *
+ * Returns 0 if it successfully read the message, else
+ * MUCSE_ERR_MBX.
+ **/
+s32 mucse_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
+		   enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	s32 ret_val = MUCSE_ERR_MBX;
+
+	/* limit read to size of mailbox */
+	if (size > mbx->size)
+		size = mbx->size;
+
+	if (mbx->ops.read)
+		ret_val = mbx->ops.read(hw, msg, size, mbx_id);
+
+	return ret_val;
+}
+
+/**
+ * mucse_write_mbx - Write a message to the mailbox
+ * @hw: Pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: Id of vf/fw to write
+ *
+ * Returns 0 if it successfully wrote the message, else
+ * MUCSE_ERR_MBX.
+ **/
+s32 mucse_write_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
+		    enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	s32 ret_val = 0;
+
+	if (size > mbx->size)
+		ret_val = MUCSE_ERR_MBX;
+	else if (mbx->ops.write)
+		ret_val = mbx->ops.write(hw, msg, size, mbx_id);
+
+	return ret_val;
+}
+
+static inline u16 mucse_mbx_get_req(struct mucse_hw *hw, int reg)
+{
+	/* force memory barrier */
+	mb();
+	return mbx_rd32(hw, reg) & 0xffff;
+}
+
+static inline u16 mucse_mbx_get_ack(struct mucse_hw *hw, int reg)
+{
+	/* force memory barrier */
+	mb();
+	return (mbx_rd32(hw, reg) >> 16);
+}
+
+static inline void mucse_mbx_inc_pf_req(struct mucse_hw *hw,
+					enum MBX_ID mbx_id)
+{
+	u16 req;
+	u32 reg;
+	u32 v;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	reg = (mbx_id == MBX_FW) ? PF2FW_COUNTER(mbx) :
+				   PF2VF_COUNTER(mbx, mbx_id);
+	v = mbx_rd32(hw, reg);
+	req = (v & 0xffff);
+	req++;
+	v &= ~(0x0000ffff);
+	v |= req;
+	/* force before write to hw */
+	mb();
+	mbx_wr32(hw, reg, v);
+	/* update stats */
+	hw->mbx.stats.msgs_tx++;
+}
+
+static inline void mucse_mbx_inc_pf_ack(struct mucse_hw *hw,
+					enum MBX_ID mbx_id)
+{
+	u16 ack;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	u32 reg = (mbx_id == MBX_FW) ? PF2FW_COUNTER(mbx) :
+				       PF2VF_COUNTER(mbx, mbx_id);
+	u32 v;
+
+	v = mbx_rd32(hw, reg);
+	ack = (v >> 16) & 0xffff;
+	ack++;
+	v &= ~(0xffff0000);
+	v |= (ack << 16);
+	/* force before write to hw */
+	mb();
+	mbx_wr32(hw, reg, v);
+	/* update stats */
+	hw->mbx.stats.msgs_rx++;
+}
+
+/**
+ * mucse_check_for_msg - Checks to see if vf/fw sent us mail
+ * @hw: Pointer to the HW structure
+ * @mbx_id: Id of vf/fw to check
+ *
+ * Returns 0 if the vf/fw has sent us a message, else
+ * MUCSE_ERR_MBX
+ **/
+s32 mucse_check_for_msg(struct mucse_hw *hw, enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	s32 ret_val = MUCSE_ERR_MBX;
+
+	if (mbx->ops.check_for_msg)
+		ret_val = mbx->ops.check_for_msg(hw, mbx_id);
+
+	return ret_val;
+}
+
+/**
+ * mucse_check_for_ack - Checks to see if vf/fw sent us ACK
+ * @hw: Pointer to the HW structure
+ * @mbx_id: Id of vf/fw to check
+ *
+ * Returns 0 if the vf/fw has ACKed our message, else
+ * MUCSE_ERR_MBX
+ **/
+s32 mucse_check_for_ack(struct mucse_hw *hw, enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	s32 ret_val = MUCSE_ERR_MBX;
+
+	if (mbx->ops.check_for_ack)
+		ret_val = mbx->ops.check_for_ack(hw, mbx_id);
+
+	return ret_val;
+}
+
+/**
+ * mucse_poll_for_msg - Wait for message notification
+ * @hw: Pointer to the HW structure
+ * @mbx_id: Id of vf/fw to poll
+ *
+ * returns 0 if it successfully received a message notification
+ **/
+static s32 mucse_poll_for_msg(struct mucse_hw *hw, enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	int countdown = mbx->timeout;
+
+	if (!countdown || !mbx->ops.check_for_msg)
+		goto out;
+
+	while (countdown && mbx->ops.check_for_msg(hw, mbx_id)) {
+		countdown--;
+		if (!countdown)
+			break;
+		udelay(mbx->usec_delay);
+	}
+out:
+	return countdown ? 0 : -ETIME;
+}
+
+/**
+ * mucse_poll_for_ack - Wait for message acknowledgment
+ * @hw: Pointer to the HW structure
+ * @mbx_id: Id of vf/fw to poll
+ *
+ * returns 0 if it successfully received a message acknowledgment
+ **/
+static s32 mucse_poll_for_ack(struct mucse_hw *hw, enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	int countdown = mbx->timeout;
+
+	if (!countdown || !mbx->ops.check_for_ack)
+		goto out;
+
+	while (countdown && mbx->ops.check_for_ack(hw, mbx_id)) {
+		countdown--;
+		if (!countdown)
+			break;
+		udelay(mbx->usec_delay);
+	}
+
+out:
+	return countdown ? 0 : MUCSE_ERR_MBX;
+}
+
+/**
+ * mucse_read_posted_mbx - Wait for message notification and receive message
+ * @hw: Pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: Id of vf/fw to read
+ *
+ * returns 0 if it successfully received a message notification and
+ * copied it into the receive buffer.
+ **/
+static s32 mucse_read_posted_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
+				 enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	s32 ret_val = MUCSE_ERR_MBX;
+
+	if (!mbx->ops.read)
+		goto out;
+
+	ret_val = mucse_poll_for_msg(hw, mbx_id);
+
+	/* if a message notification arrived, read it; otherwise we timed out */
+	if (!ret_val)
+		ret_val = mbx->ops.read(hw, msg, size, mbx_id);
+out:
+	return ret_val;
+}
+
+/**
+ * mucse_write_posted_mbx - Write a message to the mailbox, wait for ack
+ * @hw: Pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: Id of vf/fw to write
+ *
+ * returns 0 if it successfully copied message into the buffer and
+ * received an ack to that message within delay * timeout period
+ **/
+static s32 mucse_write_posted_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
+				  enum MBX_ID mbx_id)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	s32 ret_val = MUCSE_ERR_MBX;
+
+	/* if pcie is off, there is nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	/* exit if either we can't write or there isn't a defined timeout */
+	if (!mbx->ops.write || !mbx->timeout)
+		goto out;
+
+	/* send msg and hold buffer lock */
+	ret_val = mbx->ops.write(hw, msg, size, mbx_id);
+
+	/* if msg sent wait until we receive an ack */
+	if (!ret_val)
+		ret_val = mucse_poll_for_ack(hw, mbx_id);
+
+out:
+	return ret_val;
+}
+
+/**
+ * mucse_check_for_msg_pf - checks to see if the vf/fw has sent mail
+ * @hw: Pointer to the HW structure
+ * @mbx_id: Id of vf/fw to check
+ *
+ * Returns 0 if the vf/fw has sent a new message, else
+ * MUCSE_ERR_MBX
+ **/
+static s32 mucse_check_for_msg_pf(struct mucse_hw *hw,
+				  enum MBX_ID mbx_id)
+{
+	s32 ret_val = MUCSE_ERR_MBX;
+	u16 hw_req_count = 0;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	/* if pcie is off, there is nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	if (mbx_id == MBX_FW) {
+		hw_req_count = mucse_mbx_get_req(hw, FW2PF_COUNTER(mbx));
+		/* with MBX_FEATURE_NO_ZERO a zero count in hw is ignored */
+		if (mbx->mbx_feature & MBX_FEATURE_NO_ZERO) {
+			if (hw_req_count != 0 &&
+			    hw_req_count != hw->mbx.fw_req) {
+				ret_val = 0;
+				hw->mbx.stats.reqs++;
+			}
+		} else {
+			if (hw_req_count != hw->mbx.fw_req) {
+				ret_val = 0;
+				hw->mbx.stats.reqs++;
+			}
+		}
+	} else {
+		if (mucse_mbx_get_req(hw, VF2PF_COUNTER(mbx, mbx_id)) !=
+		    hw->mbx.vf_req[mbx_id]) {
+			ret_val = 0;
+			hw->mbx.stats.reqs++;
+		}
+	}
+
+	return ret_val;
+}
+
+/**
+ * mucse_check_for_ack_pf - checks to see if the vf/fw has ACKed
+ * @hw: Pointer to the HW structure
+ * @mbx_id: Id of vf/fw to check
+ *
+ * Returns 0 if the vf/fw has ACKed our message, else
+ * MUCSE_ERR_MBX
+ **/
+static s32 mucse_check_for_ack_pf(struct mucse_hw *hw, enum MBX_ID mbx_id)
+{
+	s32 ret_val = MUCSE_ERR_MBX;
+	u16 hw_fw_ack = 0;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	/* if pcie is off, there is nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	if (mbx_id == MBX_FW) {
+		hw_fw_ack = mucse_mbx_get_ack(hw, FW2PF_COUNTER(mbx));
+		if (hw_fw_ack != 0 &&
+		    hw_fw_ack != hw->mbx.fw_ack) {
+			ret_val = 0;
+			hw->mbx.stats.acks++;
+		}
+	} else {
+		if (mucse_mbx_get_ack(hw, VF2PF_COUNTER(mbx, mbx_id)) !=
+		    hw->mbx.vf_ack[mbx_id]) {
+			ret_val = 0;
+			hw->mbx.stats.acks++;
+		}
+	}
+
+	return ret_val;
+}
+
+/**
+ * mucse_obtain_mbx_lock_pf - obtain mailbox lock
+ * @hw: pointer to the HW structure
+ * @mbx_id: Id of vf/fw to obtain
+ *
+ * return 0 if we obtained the mailbox lock
+ **/
+static s32 mucse_obtain_mbx_lock_pf(struct mucse_hw *hw, enum MBX_ID mbx_id)
+{
+	int try_cnt = 5000;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	u32 ctrl_reg = (mbx_id == MBX_FW) ? PF2FW_MBOX_CTRL(mbx) :
+					    PF2VF_MBOX_CTRL(mbx, mbx_id);
+
+	while (try_cnt-- > 0) {
+		/* Take ownership of the buffer */
+		mbx_wr32(hw, ctrl_reg, MBOX_CTRL_PF_HOLD_SHM);
+		/* force write back before check */
+		wmb();
+		/* check we now hold the mailbox */
+		if (mbx_rd32(hw, ctrl_reg) & MBOX_CTRL_PF_HOLD_SHM)
+			return 0;
+		udelay(100);
+	}
+	return -EPERM;
+}
+
+/**
+ * mucse_write_mbx_pf - Places a message in the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: Id of vf/fw to write
+ *
+ * returns 0 if it successfully copied message into the buffer
+ **/
+static s32 mucse_write_mbx_pf(struct mucse_hw *hw, u32 *msg, u16 size,
+			      enum MBX_ID mbx_id)
+{
+	s32 ret_val = 0;
+	u16 i;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	u32 data_reg = (mbx_id == MBX_FW) ? FW_PF_SHM_DATA(mbx) :
+					    PF_VF_SHM_DATA(mbx, mbx_id);
+	u32 ctrl_reg = (mbx_id == MBX_FW) ? PF2FW_MBOX_CTRL(mbx) :
+					    PF2VF_MBOX_CTRL(mbx, mbx_id);
+	/* if pcie is off, we cannot exchange with hw */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	if (size > MUCSE_VFMAILBOX_SIZE)
+		return -EINVAL;
+
+	/* lock the mailbox to prevent pf/vf/fw race condition */
+	ret_val = mucse_obtain_mbx_lock_pf(hw, mbx_id);
+	if (ret_val)
+		goto out_no_write;
+
+	/* copy the caller specified message to the mailbox memory buffer */
+	for (i = 0; i < size; i++)
+		mbx_wr32(hw, data_reg + i * 4, msg[i]);
+
+	/* flush msg and acks as we are overwriting the message buffer */
+	if (mbx_id == MBX_FW) {
+		hw->mbx.fw_ack = mucse_mbx_get_ack(hw, FW2PF_COUNTER(mbx));
+	} else {
+		hw->mbx.vf_ack[mbx_id] =
+			mucse_mbx_get_ack(hw, VF2PF_COUNTER(mbx, mbx_id));
+	}
+	mucse_mbx_inc_pf_req(hw, mbx_id);
+
+	/* Interrupt VF/FW to tell it a message
+	 * has been sent and release buffer
+	 */
+	if (mbx->mbx_feature & MBX_FEATURE_WRITE_DELAY)
+		udelay(300);
+	mbx_wr32(hw, ctrl_reg, MBOX_CTRL_REQ);
+
+out_no_write:
+	return ret_val;
+}
+
+/**
+ * mucse_read_mbx_pf - Read a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: Id of vf/fw to read
+ *
+ * This function copies a message from the mailbox buffer to the caller's
+ * memory buffer.  The presumption is that the caller knows that there was
+ * a message due to a vf/fw request so no polling for message is needed.
+ **/
+static s32 mucse_read_mbx_pf(struct mucse_hw *hw, u32 *msg, u16 size,
+			     enum MBX_ID mbx_id)
+{
+	s32 ret_val = -EIO;
+	u32 i;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+	u32 buf_reg = (mbx_id == MBX_FW) ? FW_PF_SHM_DATA(mbx) :
+					   PF_VF_SHM_DATA(mbx, mbx_id);
+	u32 ctrl_reg = (mbx_id == MBX_FW) ? PF2FW_MBOX_CTRL(mbx) :
+					    PF2VF_MBOX_CTRL(mbx, mbx_id);
+	/* if pcie is off, there is nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+	if (size > MUCSE_VFMAILBOX_SIZE)
+		return -EINVAL;
+	/* lock the mailbox to prevent pf/vf race condition */
+	ret_val = mucse_obtain_mbx_lock_pf(hw, mbx_id);
+	if (ret_val)
+		goto out_no_read;
+
+	/* ensure the lock write completes before copying the message */
+	mb();
+	/* copy the message from the mailbox memory buffer */
+	for (i = 0; i < size; i++)
+		msg[i] = mbx_rd32(hw, buf_reg + 4 * i);
+	mbx_wr32(hw, buf_reg, 0);
+
+	/* update req, used by mucse_check_for_msg_pf */
+	if (mbx_id == MBX_FW) {
+		hw->mbx.fw_req = mucse_mbx_get_req(hw, FW2PF_COUNTER(mbx));
+	} else {
+		hw->mbx.vf_req[mbx_id] =
+			mucse_mbx_get_req(hw, VF2PF_COUNTER(mbx, mbx_id));
+	}
+	/* Acknowledge receipt and release mailbox, then we're done */
+	mucse_mbx_inc_pf_ack(hw, mbx_id);
+	/* free ownership of the buffer */
+	mbx_wr32(hw, ctrl_reg, 0);
+
+out_no_read:
+	return ret_val;
+}
+
+/**
+ * mucse_mbx_reset - reset mbx info, sync info from regs
+ * @hw: Pointer to the HW structure
+ *
+ * This function resets all mbx variables to their defaults.
+ **/
+static void mucse_mbx_reset(struct mucse_hw *hw)
+{
+	int idx, v;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	for (idx = 0; idx < hw->max_vfs; idx++) {
+		v = mbx_rd32(hw, VF2PF_COUNTER(mbx, idx));
+		hw->mbx.vf_req[idx] = v & 0xffff;
+		hw->mbx.vf_ack[idx] = (v >> 16) & 0xffff;
+		mbx_wr32(hw, PF2VF_MBOX_CTRL(mbx, idx), 0);
+	}
+	v = mbx_rd32(hw, FW2PF_COUNTER(mbx));
+	hw->mbx.fw_req = v & 0xffff;
+	hw->mbx.fw_ack = (v >> 16) & 0xffff;
+
+	mbx_wr32(hw, PF2FW_MBOX_CTRL(mbx), 0);
+
+	if (PF_VF_MBOX_MASK_LO(mbx))
+		mbx_wr32(hw, PF_VF_MBOX_MASK_LO(mbx), 0);
+	if (PF_VF_MBOX_MASK_HI(mbx))
+		mbx_wr32(hw, PF_VF_MBOX_MASK_HI(mbx), 0);
+
+	mbx_wr32(hw, FW_PF_MBOX_MASK(mbx), 0xffff0000);
+}
+
+/**
+ * mucse_mbx_configure_pf - configure mbx to use nr_vec interrupt
+ * @hw: Pointer to the HW structure
+ * @nr_vec: Vector number for mbx
+ * @enable: true to enable, false to disable
+ *
+ * This function configures the mbx to use interrupt vector nr_vec.
+ **/
+static int mucse_mbx_configure_pf(struct mucse_hw *hw, int nr_vec,
+				  bool enable)
+{
+	int idx = 0;
+	u32 v;
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	/* if pcie is off, there is nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+	if (enable) {
+		for (idx = 0; idx < hw->max_vfs; idx++) {
+			v = mbx_rd32(hw, VF2PF_COUNTER(mbx, idx));
+			hw->mbx.vf_req[idx] = v & 0xffff;
+			hw->mbx.vf_ack[idx] = (v >> 16) & 0xffff;
+
+			mbx_wr32(hw, PF2VF_MBOX_CTRL(mbx, idx), 0);
+		}
+		v = mbx_rd32(hw, FW2PF_COUNTER(mbx));
+		hw->mbx.fw_req = v & 0xffff;
+		hw->mbx.fw_ack = (v >> 16) & 0xffff;
+		mbx_wr32(hw, PF2FW_MBOX_CTRL(mbx), 0);
+
+		/* vf to pf req interrupt */
+		for (idx = 0; idx < hw->max_vfs; idx++) {
+			mbx_wr32(hw, VF2PF_MBOX_VEC(mbx, idx),
+				 nr_vec);
+		}
+
+		/* allow vf to pf irq vectors */
+		if (PF_VF_MBOX_MASK_LO(mbx))
+			mbx_wr32(hw, PF_VF_MBOX_MASK_LO(mbx), 0);
+
+		if (PF_VF_MBOX_MASK_HI(mbx))
+			mbx_wr32(hw, PF_VF_MBOX_MASK_HI(mbx), 0);
+
+		/* bind fw mbx to irq */
+		mbx_wr32(hw, FW2PF_MBOX_VEC(mbx), nr_vec);
+		/* allow fw to pf mbx irq */
+		mbx_wr32(hw, FW_PF_MBOX_MASK(mbx), 0xffff0000);
+	} else {
+		/* disable vf to pf irq */
+		if (PF_VF_MBOX_MASK_LO(mbx))
+			mbx_wr32(hw, PF_VF_MBOX_MASK_LO(mbx),
+				 0xffffffff);
+		if (PF_VF_MBOX_MASK_HI(mbx))
+			mbx_wr32(hw, PF_VF_MBOX_MASK_HI(mbx),
+				 0xffffffff);
+
+		/* disable fw to pf mbx irq */
+		mbx_wr32(hw, FW_PF_MBOX_MASK(mbx), 0xfffffffe);
+
+		/* reset vf->pf status/ctrl */
+		for (idx = 0; idx < hw->max_vfs; idx++)
+			mbx_wr32(hw, PF2VF_MBOX_CTRL(mbx, idx), 0);
+		/* reset pf->fw ctrl */
+		mbx_wr32(hw, PF2FW_MBOX_CTRL(mbx), 0);
+		/* used to sync link status */
+		mbx_wr32(hw, RNPGBE_DMA_DUMY, 0);
+	}
+	return 0;
+}
+
+/**
+ * mucse_init_mbx_params_pf - set initial values for pf mailbox
+ * @hw: pointer to the HW structure
+ *
+ * Initializes the hw->mbx struct to correct values for pf mailbox
+ */
+static s32 mucse_init_mbx_params_pf(struct mucse_hw *hw)
+{
+	struct mucse_mbx_info *mbx = &hw->mbx;
+
+	mbx->usec_delay = 100;
+	mbx->timeout = (4 * 1000 * 1000) / mbx->usec_delay;
+	mbx->stats.msgs_tx = 0;
+	mbx->stats.msgs_rx = 0;
+	mbx->stats.reqs = 0;
+	mbx->stats.acks = 0;
+	mbx->stats.rsts = 0;
+	mbx->size = MUCSE_VFMAILBOX_SIZE;
+
+	mutex_init(&mbx->lock);
+	mucse_mbx_reset(hw);
+	return 0;
+}
+
+struct mucse_mbx_operations mucse_mbx_ops_generic = {
+	.init_params = mucse_init_mbx_params_pf,
+	.read = mucse_read_mbx_pf,
+	.write = mucse_write_mbx_pf,
+	.read_posted = mucse_read_posted_mbx,
+	.write_posted = mucse_write_posted_mbx,
+	.check_for_msg = mucse_check_for_msg_pf,
+	.check_for_ack = mucse_check_for_ack_pf,
+	.configure = mucse_mbx_configure_pf,
+};
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
new file mode 100644
index 000000000000..05231c76718e
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#ifndef _RNPGBE_MBX_H
+#define _RNPGBE_MBX_H
+
+#include "rnpgbe.h"
+#define MUCSE_ERR_MBX (-100)
+/* 14 words */
+#define MUCSE_VFMAILBOX_SIZE 14
+/* ================ PF <--> VF mailbox ================ */
+#define SHARE_MEM_BYTES 64
+static inline u32 PF_VF_SHM(struct mucse_mbx_info *mbx, int vf)
+{
+	return mbx->pf_vf_shm_base + mbx->mbx_mem_size * vf;
+}
+
+#define PF2VF_COUNTER(mbx, vf) (PF_VF_SHM(mbx, vf) + 0)
+#define VF2PF_COUNTER(mbx, vf) (PF_VF_SHM(mbx, vf) + 4)
+#define PF_VF_SHM_DATA(mbx, vf) (PF_VF_SHM(mbx, vf) + 8)
+#define VF2PF_MBOX_VEC(mbx, vf) ((mbx)->vf2pf_mbox_vec_base + 4 * (vf))
+#define PF2VF_MBOX_CTRL(mbx, vf) ((mbx)->pf2vf_mbox_ctrl_base + 4 * (vf))
+#define PF_VF_MBOX_MASK_LO(mbx) ((mbx)->pf_vf_mbox_mask_lo)
+#define PF_VF_MBOX_MASK_HI(mbx) ((mbx)->pf_vf_mbox_mask_hi)
+/* ================ PF <--> FW mailbox ================ */
+#define FW_PF_SHM(mbx) ((mbx)->fw_pf_shm_base)
+#define FW2PF_COUNTER(mbx) (FW_PF_SHM(mbx) + 0)
+#define PF2FW_COUNTER(mbx) (FW_PF_SHM(mbx) + 4)
+#define FW_PF_SHM_DATA(mbx) (FW_PF_SHM(mbx) + 8)
+#define FW2PF_MBOX_VEC(mbx) ((mbx)->fw2pf_mbox_vec)
+#define PF2FW_MBOX_CTRL(mbx) ((mbx)->pf2fw_mbox_ctrl)
+#define FW_PF_MBOX_MASK(mbx) ((mbx)->fw_pf_mbox_mask)
+#define MBOX_CTRL_REQ BIT(0) /* WO */
+#define MBOX_CTRL_PF_HOLD_SHM (BIT(3)) /* VF:RO, PF:WR */
+#define MBOX_IRQ_EN 0
+#define MBOX_IRQ_DISABLE 1
+#define mbx_rd32(hw, reg) rnpgbe_rd_reg((hw)->hw_addr + (reg))
+#define mbx_wr32(hw, reg, val) rnpgbe_wr_reg((hw)->hw_addr + (reg), (val))
+
+extern struct mucse_mbx_operations mucse_mbx_ops_generic;
+
+s32 mucse_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
+		   enum MBX_ID mbx_id);
+s32 mucse_write_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
+		    enum MBX_ID mbx_id);
+s32 mucse_check_for_msg(struct mucse_hw *hw, enum MBX_ID mbx_id);
+s32 mucse_check_for_ack(struct mucse_hw *hw, enum MBX_ID mbx_id);
+#endif /* _RNPGBE_MBX_H */
-- 
2.25.1
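The mailbox handshake in this patch packs two 16-bit counters into one 32-bit register word: the request count in bits 15:0 and the ack count in bits 31:16, with MBX_FEATURE_NO_ZERO treating a zero count as invalid. A minimal userspace model of that packing (illustrative only, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the 32-bit mailbox counter word used by the pf/fw handshake:
 * request count in bits 15:0, ack count in bits 31:16. These helpers
 * mirror the increment logic of mucse_mbx_inc_pf_req/ack in spirit.
 */
static uint32_t counter_inc_req(uint32_t v)
{
	uint16_t req = (uint16_t)(v & 0xffff) + 1; /* wraps at 16 bits */

	return (v & 0xffff0000u) | req;
}

static uint32_t counter_inc_ack(uint32_t v)
{
	uint16_t ack = (uint16_t)(v >> 16) + 1; /* wraps at 16 bits */

	return (v & 0x0000ffffu) | ((uint32_t)ack << 16);
}

/* A new message is pending when the hw request count differs from the
 * last count we recorded; with no_zero set, a zero count is ignored.
 */
static int msg_pending(uint32_t hw_word, uint16_t last_req, int no_zero)
{
	uint16_t req = hw_word & 0xffff;

	if (no_zero && req == 0)
		return 0;
	return req != last_req;
}
```

Comparing counts rather than polling a status bit lets the PF detect a new request even if it missed the interrupt, since the counter only ever moves forward (mod 2^16).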



* [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw ops support
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (2 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 03/15] net: rnpgbe: Add basic mbx ops support Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-04 18:25   ` Andrew Lunn
  2025-07-03  1:48 ` [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip Dong Yibo
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add support for getting hw capabilities from firmware via the mbx_fw ops.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   3 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |   8 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |   8 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h    |   3 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c | 141 +++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h | 530 ++++++++++++++++++
 6 files changed, 691 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h

diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ethernet/mucse/rnpgbe/Makefile
index 41177103b50c..fd455cb111a9 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/Makefile
+++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile
@@ -7,4 +7,5 @@
 obj-$(CONFIG_MGBE) += rnpgbe.o
 rnpgbe-objs := rnpgbe_main.o \
 	       rnpgbe_chip.o \
-	       rnpgbe_mbx.o
+	       rnpgbe_mbx.o \
+	       rnpgbe_mbx_fw.o
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 4cafab16f5bf..fd1610318c75 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -155,6 +155,14 @@ struct mucse_hw {
 	u16 vendor_id;
 	u16 subsystem_device_id;
 	u16 subsystem_vendor_id;
+	u32 wol;
+	u32 wol_en;
+	u32 fw_version;
+	u32 axi_mhz;
+	u32 bd_uid;
+	int ncsi_en;
+	int force_en;
+	int force_cap;
 	int max_vfs;
 	int max_vfs_noari;
 	enum rnpgbe_hw_type hw_type;
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index e125b609ba09..b701b42b7c42 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -9,6 +9,7 @@
 #include <linux/etherdevice.h>
 
 #include "rnpgbe.h"
+#include "rnpgbe_mbx_fw.h"
 
 char rnpgbe_driver_name[] = "rnpgbe";
 static const char rnpgbe_driver_string[] =
@@ -137,6 +138,13 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	ii->get_invariants(hw);
 	hw->mbx.ops.init_params(hw);
 
+	if (mucse_mbx_get_capability(hw)) {
+		dev_err(&pdev->dev,
+			"mucse_mbx_get_capability failed!\n");
+		err = -EIO;
+		goto err_free_net;
+	}
+
 	return 0;
 
 err_free_net:
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
index 05231c76718e..2040b86f4cad 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+/* Copyright(c) 2022 - 2025 Mucse Corporation. */
 
 #ifndef _RNPGBE_MBX_H
 #define _RNPGBE_MBX_H
@@ -36,6 +36,7 @@ static inline u32 PF_VF_SHM(struct mucse_mbx_info *mbx, int vf)
 #define MBOX_IRQ_DISABLE 1
 #define mbx_rd32(hw, reg) rnpgbe_rd_reg((hw)->hw_addr + (reg))
 #define mbx_wr32(hw, reg, val) rnpgbe_wr_reg((hw)->hw_addr + (reg), (val))
+#define hw_wr32(hw, reg, val) rnpgbe_wr_reg((hw)->hw_addr + (reg), (val))
 
 extern struct mucse_mbx_operations mucse_mbx_ops_generic;
 
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
new file mode 100644
index 000000000000..7fdfccdba80b
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
@@ -0,0 +1,141 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#include <linux/pci.h>
+
+#include "rnpgbe_mbx_fw.h"
+
+/**
+ * mucse_fw_send_cmd_wait - Send cmd req and wait for response
+ * @hw: Pointer to the HW structure
+ * @req: Pointer to the cmd req structure
+ * @reply: Pointer to the fw reply structure
+ *
+ * mucse_fw_send_cmd_wait sends a request to the pf-cm3 mailbox and
+ * waits for a reply from firmware.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int mucse_fw_send_cmd_wait(struct mucse_hw *hw,
+				  struct mbx_fw_cmd_req *req,
+				  struct mbx_fw_cmd_reply *reply)
+{
+	int err;
+	int retry_cnt = 3;
+
+	if (!hw || !req || !reply || !hw->mbx.ops.read_posted)
+		return -EINVAL;
+
+	/* if PCIe is off, nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+
+	err = hw->mbx.ops.write_posted(hw, (u32 *)req,
+				       L_WD(req->datalen + MBX_REQ_HDR_LEN),
+				       MBX_FW);
+	if (err) {
+		mutex_unlock(&hw->mbx.lock);
+		return err;
+	}
+
+retry:
+	retry_cnt--;
+	if (retry_cnt < 0) {
+		mutex_unlock(&hw->mbx.lock);
+		return -EIO;
+	}
+
+	err = hw->mbx.ops.read_posted(hw, (u32 *)reply,
+				      L_WD(sizeof(*reply)),
+				      MBX_FW);
+	if (err) {
+		mutex_unlock(&hw->mbx.lock);
+		return err;
+	}
+
+	if (reply->opcode != req->opcode)
+		goto retry;
+
+	mutex_unlock(&hw->mbx.lock);
+
+	if (reply->error_code)
+		return -reply->error_code;
+
+	return 0;
+}
+
+/**
+ * mucse_fw_get_capablity - Get hw abilities from fw
+ * @hw: Pointer to the HW structure
+ * @abil: Pointer to the hw_abilities structure
+ *
+ * mucse_fw_get_capablity tries to get hardware abilities from
+ * firmware.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int mucse_fw_get_capablity(struct mucse_hw *hw,
+				  struct hw_abilities *abil)
+{
+	int err = 0;
+	struct mbx_fw_cmd_req req;
+	struct mbx_fw_cmd_reply reply;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+	build_phy_abalities_req(&req, &req);
+	err = mucse_fw_send_cmd_wait(hw, &req, &reply);
+	if (err == 0)
+		memcpy(abil, &reply.hw_abilities, sizeof(*abil));
+
+	return err;
+}
+
+/**
+ * mucse_mbx_get_capability - Get hw abilities from fw
+ * @hw: Pointer to the HW structure
+ *
+ * mucse_mbx_get_capability tries to read capabilities from
+ * the hardware, retrying several times on failure.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int mucse_mbx_get_capability(struct mucse_hw *hw)
+{
+	int err = 0;
+	struct hw_abilities ability;
+	int try_cnt = 3;
+
+	memset(&ability, 0, sizeof(ability));
+
+	while (try_cnt--) {
+		err = mucse_fw_get_capablity(hw, &ability);
+		if (err == 0) {
+			hw->ncsi_en = (ability.nic_mode & 0x4) ? 1 : 0;
+			hw->pfvfnum = ability.pfnum;
+			hw->fw_version = ability.fw_version;
+			hw->axi_mhz = ability.axi_mhz;
+			hw->bd_uid = ability.bd_uid;
+
+			if (hw->fw_version >= 0x0001012C) {
+				/* this version can get wol_en from hw */
+				hw->wol = ability.wol_status & 0xff;
+				hw->wol_en = ability.wol_status & 0x100;
+			} else {
+				/* older versions: only pf0 or ncsi can wol */
+				hw->wol = ability.wol_status & 0xff;
+				if (hw->ncsi_en || !ability.pfnum)
+					hw->wol_en = 1;
+			}
+			/* 0.1.5.0 can get force status from fw */
+			if (hw->fw_version >= 0x00010500) {
+				hw->force_en = ability.e.force_down_en;
+				hw->force_cap = 1;
+			}
+			return 0;
+		}
+	}
+
+	return -EIO;
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
new file mode 100644
index 000000000000..c5f2c3ff4068
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
@@ -0,0 +1,530 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#ifndef _RNPGBE_MBX_FW_H
+#define _RNPGBE_MBX_FW_H
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/wait.h>
+
+#include "rnpgbe.h"
+
+#define MBX_REQ_HDR_LEN 24
+#define L_WD(x) ((x) / 4)
+
+struct mbx_fw_cmd_reply;
+typedef void (*cookie_cb)(struct mbx_fw_cmd_reply *reply, void *priv);
+
+struct mbx_req_cookie {
+	int magic;
+#define COOKIE_MAGIC 0xCE
+	cookie_cb cb;
+	int timeout_jiffes;
+	int errcode;
+	wait_queue_head_t wait;
+	int done;
+	int priv_len;
+	char priv[64];
+};
+
+enum MUCSE_FW_CMD {
+	GET_VERSION = 0x0001,
+	READ_REG = 0xFF03,
+	WRITE_REG = 0xFF04,
+	MODIFY_REG = 0xFF07,
+	IFUP_DOWN = 0x0800,
+	SEND_TO_PF = 0x0801,
+	SEND_TO_VF = 0x0802,
+	DRIVER_INSMOD = 0x0803,
+	SYSTEM_SUSPUSE = 0x0804,
+	SYSTEM_FORCE = 0x0805,
+	GET_PHY_ABALITY = 0x0601,
+	GET_MAC_ADDRES = 0x0602,
+	RESET_PHY = 0x0603,
+	LED_SET = 0x0604,
+	GET_LINK_STATUS = 0x0607,
+	LINK_STATUS_EVENT = 0x0608,
+	SET_LANE_FUN = 0x0609,
+	GET_LANE_STATUS = 0x0610,
+	SFP_SPEED_CHANGED_EVENT = 0x0611,
+	SET_EVENT_MASK = 0x0613,
+	SET_LOOPBACK_MODE = 0x0618,
+	SET_PHY_REG = 0x0628,
+	GET_PHY_REG = 0x0629,
+	PHY_LINK_SET = 0x0630,
+	GET_PHY_STATISTICS = 0x0631,
+	PHY_PAUSE_SET = 0x0632,
+	PHY_PAUSE_GET = 0x0633,
+	PHY_EEE_SET = 0x0636,
+	PHY_EEE_GET = 0x0637,
+	SFP_MODULE_READ = 0x0900,
+	SFP_MODULE_WRITE = 0x0901,
+	FW_UPDATE = 0x0700,
+	FW_MAINTAIN = 0x0701,
+	FW_UPDATE_GBE = 0x0702,
+	WOL_EN = 0x0910,
+	GET_DUMP = 0x0a00,
+	SET_DUMP = 0x0a10,
+	GET_TEMP = 0x0a11,
+	SET_WOL = 0x0a12,
+	SET_TEST_MODE = 0x0a13,
+	SHOW_TX_STAMP = 0x0a14,
+	LLDP_TX_CTRL = 0x0a15,
+};
+
+struct hw_abilities {
+	u8 link_stat;
+	u8 lane_mask;
+	int speed;
+	u16 phy_type;
+	u16 nic_mode;
+	u16 pfnum;
+	u32 fw_version;
+	u32 axi_mhz;
+	union {
+		u8 port_id[4];
+		u32 port_ids;
+	};
+	u32 bd_uid;
+	int phy_id;
+	int wol_status;
+	union {
+		int ext_ablity;
+		struct {
+			u32 valid : 1; /* 0 */
+			u32 wol_en : 1; /* 1 */
+			u32 pci_preset_runtime_en : 1; /* 2 */
+			u32 smbus_en : 1; /* 3 */
+			u32 ncsi_en : 1; /* 4 */
+			u32 rpu_en : 1; /* 5 */
+			u32 v2 : 1; /* 6 */
+			u32 pxe_en : 1; /* 7 */
+			u32 mctp_en : 1; /* 8 */
+			u32 yt8614 : 1; /* 9 */
+			u32 pci_ext_reset : 1; /* 10 */
+			u32 rpu_availble : 1; /* 11 */
+			u32 fw_lldp_ablity : 1; /* 12 */
+			u32 lldp_enabled : 1; /* 13 */
+			u32 only_1g : 1; /* 14 */
+			u32 force_down_en: 1;
+		} e;
+	};
+} __packed;
+
+struct phy_pause_data {
+	u32 pause_mode;
+};
+
+struct lane_stat_data {
+	u8 nr_lane;
+	u8 pci_gen : 4;
+	u8 pci_lanes : 4;
+	u8 pma_type;
+	u8 phy_type;
+	u16 linkup : 1;
+	u16 duplex : 1;
+	u16 autoneg : 1;
+	u16 fec : 1;
+	u16 an : 1;
+	u16 link_traing : 1;
+	u16 media_availble : 1;
+	u16 is_sgmii : 1;
+	u16 link_fault : 4;
+#define LINK_LINK_FAULT BIT(0)
+#define LINK_TX_FAULT BIT(1)
+#define LINK_RX_FAULT BIT(2)
+#define LINK_REMOTE_FAULT BIT(3)
+	u16 is_backplane : 1;
+	u16 tp_mdx : 2;
+	union {
+		u8 phy_addr;
+		struct {
+			u8 mod_abs : 1;
+			u8 fault : 1;
+			u8 tx_dis : 1;
+			u8 los : 1;
+		} sfp;
+	};
+	u8 sfp_connector;
+	u32 speed;
+	u32 si_main;
+	u32 si_pre;
+	u32 si_post;
+	u32 si_tx_boost;
+	u32 supported_link;
+	u32 phy_id;
+	u32 advertised_link;
+} __packed;
+
+struct yt_phy_statistics {
+	u32 pkg_ib_valid; /* rx crc good and length 64-1518 */
+	u32 pkg_ib_os_good; /* rx crc good and length >1518 */
+	u32 pkg_ib_us_good; /* rx crc good and length <64 */
+	u16 pkg_ib_err; /* rx crc wrong and length 64-1518 */
+	u16 pkg_ib_os_bad; /* rx crc wrong and length >1518 */
+	u16 pkg_ib_frag; /* rx crc wrong and length <64 */
+	u16 pkg_ib_nosfd; /* rx sfd missed */
+	u32 pkg_ob_valid; /* tx crc good and length 64-1518 */
+	u32 pkg_ob_os_good; /* tx crc good and length >1518 */
+	u32 pkg_ob_us_good; /* tx crc good and length <64 */
+	u16 pkg_ob_err; /* tx crc wrong and length 64-1518 */
+	u16 pkg_ob_os_bad; /* tx crc wrong and length >1518 */
+	u16 pkg_ob_frag; /* tx crc wrong and length <64 */
+	u16 pkg_ob_nosfd; /* tx sfd missed */
+} __packed;
+
+struct phy_statistics {
+	union {
+		struct yt_phy_statistics yt;
+	};
+} __packed;
+
+struct port_stat {
+	u8 phyid;
+	u8 duplex : 1;
+	u8 autoneg : 1;
+	u8 fec : 1;
+	u16 speed;
+	u16 pause : 4;
+	u16 local_eee : 3;
+	u16 partner_eee : 3;
+	u16 tp_mdx : 2;
+	u16 lldp_status : 1;
+	u16 revs : 3;
+} __packed;
+
+#define FLAGS_DD BIT(0) /* driver clear 0, FW must set 1 */
+/* driver clear 0, FW must set only if it reporting an error */
+#define FLAGS_ERR BIT(2)
+
+/* req is little endian; big endian should be considered */
+struct mbx_fw_cmd_req {
+	u16 flags; /* 0-1 */
+	u16 opcode; /* 2-3 enum GENERIC_CMD */
+	u16 datalen; /* 4-5 */
+	u16 ret_value; /* 6-7 */
+	union {
+		struct {
+			u32 cookie_lo; /* 8-11 */
+			u32 cookie_hi; /* 12-15 */
+		};
+
+		void *cookie;
+	};
+	u32 reply_lo; /* 16-19 5dw */
+	u32 reply_hi; /* 20-23 */
+	union {
+		u8 data[32];
+		struct {
+			u32 addr;
+			u32 bytes;
+		} r_reg;
+
+		struct {
+			u32 addr;
+			u32 bytes;
+			u32 data[4];
+		} w_reg;
+
+		struct {
+			u32 lanes;
+		} ptp;
+
+		struct {
+			u32 lane;
+			u32 up;
+		} ifup;
+
+		struct {
+			u32 sec;
+			u32 nanosec;
+
+		} tstamps;
+
+		struct {
+			u32 lane;
+			u32 status;
+		} ifinsmod;
+
+		struct {
+			u32 lane;
+			u32 status;
+		} ifforce;
+
+		struct {
+			u32 lane;
+			u32 status;
+		} ifsuspuse;
+
+		struct {
+			int nr_lane;
+		} get_lane_st;
+
+		struct {
+			int nr_lane;
+			u32 func;
+#define LANE_FUN_AN 0
+#define LANE_FUN_LINK_TRAING 1
+#define LANE_FUN_FEC 2
+#define LANE_FUN_SI 3
+#define LANE_FUN_SFP_TX_DISABLE 4
+#define LANE_FUN_PCI_LANE 5
+#define LANE_FUN_PRBS 6
+#define LANE_FUN_SPEED_CHANGE 7
+			u32 value0;
+			u32 value1;
+			u32 value2;
+			u32 value3;
+		} set_lane_fun;
+
+		struct {
+			u32 flag;
+			int nr_lane;
+		} set_dump;
+
+		struct {
+			u32 lane;
+			u32 enable;
+		} wol;
+
+		struct {
+			u32 lane;
+			u32 mode;
+		} gephy_test;
+
+		struct {
+			u32 lane;
+			u32 op;
+			u32 enable;
+			u32 inteval;
+		} lldp_tx;
+
+		struct {
+			u32 bytes;
+			int nr_lane;
+			u32 bin_offset;
+			u32 no_use;
+		} get_dump;
+
+		struct {
+			int nr_lane;
+			u32 value;
+#define LED_IDENTIFY_INACTIVE 0
+#define LED_IDENTIFY_ACTIVE 1
+#define LED_IDENTIFY_ON 2
+#define LED_IDENTIFY_OFF 3
+		} led_set;
+
+		struct {
+			u32 addr;
+			u32 data;
+			u32 mask;
+		} modify_reg;
+
+		struct {
+			u32 adv_speed_mask;
+			u32 autoneg;
+			u32 speed;
+			u32 duplex;
+			int nr_lane;
+			u32 tp_mdix_ctrl;
+		} phy_link_set;
+
+		struct {
+			u32 pause_mode;
+			int nr_lane;
+		} phy_pause_set;
+
+		struct {
+			u32 pause_mode;
+			int nr_lane;
+		} phy_pause_get;
+
+		struct {
+			u32 local_eee;
+			u32 tx_lpi_timer;
+			int nr_lane;
+		} phy_eee_set;
+
+		struct {
+			int nr_lane;
+			u32 sfp_adr; /* 0xa0 or 0xa2 */
+			u32 reg;
+			u32 cnt;
+		} sfp_read;
+
+		struct {
+			int nr_lane;
+			u32 sfp_adr; /* 0xa0 or 0xa2 */
+			u32 reg;
+			u32 val;
+		} sfp_write;
+
+		struct {
+			int nr_lane; /* 0-3 */
+		} get_linkstat;
+
+		struct {
+			u16 changed_lanes;
+			u16 lane_status;
+			u32 port_st_magic;
+#define SPEED_VALID_MAGIC 0xa4a6a8a9
+			struct port_stat st[4];
+		} link_stat; /* FW->RC */
+
+		struct {
+			u16 enable_stat;
+			u16 event_mask;
+		} stat_event_mask;
+
+		struct {
+			u32 cmd;
+			u32 arg0;
+			u32 req_bytes;
+			u32 reply_bytes;
+			u32 ddr_lo;
+			u32 ddr_hi;
+		} maintain;
+
+		struct { /* set phy register */
+			u8 phy_interface;
+			union {
+				u8 page_num;
+				u8 external_phy_addr;
+			};
+			u32 phy_reg_addr;
+			u32 phy_w_data;
+			u32 reg_addr;
+			u32 w_data;
+			/* 1 = ignore page_num, use last QSFP */
+			u8 recall_qsfp_page : 1;
+			/* page value */
+			/* 0 = use page_num for QSFP */
+			u8 nr_lane;
+		} set_phy_reg;
+
+		struct {
+			int lane_mask;
+			u32 pfvf_num;
+		} get_mac_addr;
+
+		struct {
+			u8 phy_interface;
+			union {
+				u8 page_num;
+				u8 external_phy_addr;
+			};
+			int phy_reg_addr;
+			u8 nr_lane;
+		} get_phy_reg;
+
+		struct {
+			int nr_lane;
+		} phy_statistics;
+
+		struct {
+			u8 paration;
+			u32 bytes;
+			u32 bin_phy_lo;
+			u32 bin_phy_hi;
+		} fw_update;
+	};
+} __packed;
+
+#define EEE_1000BT BIT(2)
+#define EEE_100BT BIT(1)
+
+struct rnpgbe_eee_cap {
+	u32 local_capability;
+	u32 local_eee;
+	u32 partner_eee;
+};
+
+/* firmware -> driver */
+struct mbx_fw_cmd_reply {
+	/* fw must set: DD, CMP, Error(if error), copy value */
+	u16 flags;
+	/* from command: LB,RD,VFC,BUF,SI,EI,FE */
+	u16 opcode; /* 2-3: copy from req */
+	u16 error_code; /* 4-5: 0 if no error */
+	u16 datalen; /* 6-7: */
+	union {
+		struct {
+			u32 cookie_lo; /* 8-11: */
+			u32 cookie_hi; /* 12-15: */
+		};
+		void *cookie;
+	};
+	/* ===== data ==== [16-64] */
+	union {
+		u8 data[40];
+
+		struct version {
+			u32 major;
+			u32 sub;
+			u32 modify;
+		} version;
+
+		struct {
+			u32 value[4];
+		} r_reg;
+
+		struct {
+			u32 new_value;
+		} modify_reg;
+
+		struct get_temp {
+			int temp;
+			int volatage;
+		} get_temp;
+
+		struct {
+#define MBX_SFP_READ_MAX_CNT 32
+			u8 value[MBX_SFP_READ_MAX_CNT];
+		} sfp_read;
+
+		struct mac_addr {
+			int lanes;
+			struct _addr {
+				/*
+				 * for macaddr:01:02:03:04:05:06
+				 * mac-hi=0x01020304 mac-lo=0x05060000
+				 */
+				u8 mac[8];
+			} addrs[4];
+		} mac_addr;
+
+		struct get_dump_reply {
+			u32 flags;
+			u32 version;
+			u32 bytes;
+			u32 data[4];
+		} get_dump;
+
+		struct get_lldp_reply {
+			u32 value;
+			u32 inteval;
+		} get_lldp;
+
+		struct rnpgbe_eee_cap phy_eee_abilities;
+		struct lane_stat_data lanestat;
+		struct hw_abilities hw_abilities;
+		struct phy_statistics phy_statistics;
+	};
+} __packed;
+
+static inline void build_phy_abalities_req(struct mbx_fw_cmd_req *req,
+					   void *cookie)
+{
+	req->flags = 0;
+	req->opcode = GET_PHY_ABALITY;
+	req->datalen = 0;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->cookie = cookie;
+}
+
+int mucse_mbx_get_capability(struct mucse_hw *hw);
+
+#endif /* _RNPGBE_MBX_FW_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (3 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw " Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-04 18:33   ` Andrew Lunn
  2025-07-03  1:48 ` [PATCH 06/15] net: rnpgbe: Add some functions for hw->ops Dong Yibo
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add a firmware download function for the n210 series.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   3 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |   3 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  94 ++++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c    | 236 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h    |  30 +++
 5 files changed, 364 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h

diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ethernet/mucse/rnpgbe/Makefile
index fd455cb111a9..db7d3a8140b2 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/Makefile
+++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile
@@ -8,4 +8,5 @@ obj-$(CONFIG_MGBE) += rnpgbe.o
 rnpgbe-objs := rnpgbe_main.o \
 	       rnpgbe_chip.o \
 	       rnpgbe_mbx.o \
-	       rnpgbe_mbx_fw.o
+	       rnpgbe_mbx_fw.o \
+	       rnpgbe_sfc.o
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index fd1610318c75..4e07d39d55ba 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -21,6 +21,7 @@ enum rnpgbe_hw_type {
 	rnpgbe_hw_n500 = 0,
 	rnpgbe_hw_n210,
 	rnpgbe_hw_n210L,
+	rnpgbe_hw_unknow,
 };
 
 struct mucse_dma_info {
@@ -199,6 +200,8 @@ struct mucse {
 	struct mucse_hw hw;
 	/* board number */
 	u16 bd_number;
+	u32 flags2;
+#define M_FLAG2_NO_NET_REG ((u32)(1 << 0))
 
 	char name[60];
 };
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index b701b42b7c42..a7b8eb53cd69 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -7,9 +7,11 @@
 #include <linux/netdevice.h>
 #include <linux/string.h>
 #include <linux/etherdevice.h>
+#include <linux/firmware.h>
 
 #include "rnpgbe.h"
 #include "rnpgbe_mbx_fw.h"
+#include "rnpgbe_sfc.h"
 
 char rnpgbe_driver_name[] = "rnpgbe";
 static const char rnpgbe_driver_string[] =
@@ -42,6 +44,56 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
 	{0, },
 };
 
+/**
+ * rnpgbe_check_fw_from_flash - Check chip-id and bin-id
+ * @hw: hardware structure
+ * @data: data from bin files
+ *
+ * rnpgbe_check_fw_from_flash tries to match the chip-id and bin-id.
+ *
+ * Returns 0 on match, negative on failure
+ **/
+static int rnpgbe_check_fw_from_flash(struct mucse_hw *hw, const u8 *data)
+{
+	u32 device_id;
+	int ret = 0;
+	u32 chip_data;
+	enum rnpgbe_hw_type hw_type = rnpgbe_hw_unknow;
+#define RNPGBE_BIN_HEADER (0xa55aa55a)
+	if (*((u32 *)(data)) != RNPGBE_BIN_HEADER)
+		return -EINVAL;
+
+	device_id = *((u16 *)data + 30);
+
+	/* if no device_id no check */
+	if (device_id == 0 || device_id == 0xffff)
+		return 0;
+
+#define CHIP_OFFSET (0x1f014 + 0x1000)
+	/* we should get hw_type from sfc-flash */
+	chip_data = ioread32(hw->hw_addr + CHIP_OFFSET);
+	if (chip_data == 0x11111111)
+		hw_type = rnpgbe_hw_n210;
+	else if (chip_data == 0x0)
+		hw_type = rnpgbe_hw_n210L;
+
+	switch (hw_type) {
+	case rnpgbe_hw_n210:
+		if (device_id != 0x8208)
+			ret = -1;
+		break;
+	case rnpgbe_hw_n210L:
+		if (device_id != 0x820a)
+			ret = -1;
+		break;
+	default:
+		ret = -1;
+	}
+
+	return ret;
+}
+
 /**
  * init_firmware_for_n210 - download firmware
  * @hw: hardware structure
@@ -53,7 +105,46 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
  **/
 static int init_firmware_for_n210(struct mucse_hw *hw)
 {
-	return 0;
+	char *filename = "n210_driver_update.bin";
+	const struct firmware *fw;
+	struct pci_dev *pdev = hw->pdev;
+	int rc = 0;
+	int err = 0;
+	struct mucse *mucse = (struct mucse *)hw->back;
+
+	rc = request_firmware(&fw, filename, &pdev->dev);
+
+	if (rc != 0) {
+		dev_err(&pdev->dev, "requesting firmware file failed\n");
+		return rc;
+	}
+
+	if (rnpgbe_check_fw_from_flash(hw, fw->data)) {
+		dev_info(&pdev->dev, "firmware type error\n");
+		release_firmware(fw);
+		return -EIO;
+	}
+	/* first protect off */
+	mucse_sfc_write_protect(hw);
+	err = mucse_sfc_flash_erase(hw, fw->size);
+	if (err) {
+		dev_err(&pdev->dev, "erase flash failed!");
+		goto out;
+	}
+
+	err = mucse_download_firmware(hw, fw->data, fw->size);
+	if (err) {
+		dev_err(&pdev->dev, "init firmware failed!");
+		goto out;
+	}
+	dev_info(&pdev->dev, "init firmware successfully.");
+	dev_info(&pdev->dev, "Please reboot.");
+
+out:
+	release_firmware(fw);
+	mucse->flags2 |= M_FLAG2_NO_NET_REG;
+
+	return err;
 }
 
 /**
@@ -97,6 +188,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	hw = &mucse->hw;
 	hw->back = mucse;
 	hw->hw_type = ii->hw_type;
+	hw->pdev = pdev;
 
 	switch (hw->hw_type) {
 	case rnpgbe_hw_n500:
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c
new file mode 100644
index 000000000000..8e0919f5d47e
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2022 - 2025 Mucse Corporation. */
+
+#include <linux/pci.h>
+
+#include "rnpgbe_sfc.h"
+#include "rnpgbe.h"
+
+static inline void mucse_sfc_command(u8 __iomem *hw_addr, u32 cmd)
+{
+	iowrite32(cmd, (hw_addr + 0x8));
+	iowrite32(1, (hw_addr + 0x0));
+	while (ioread32(hw_addr) != 0)
+		;
+}
+
+static inline void mucse_sfc_flash_write_disable(u8 __iomem *hw_addr)
+{
+	iowrite32(CMD_CYCLE(8), (hw_addr + 0x10));
+	iowrite32(WR_DATA_CYCLE(0), (hw_addr + 0x14));
+
+	mucse_sfc_command(hw_addr, CMD_WRITE_DISABLE);
+}
+
+static int32_t mucse_sfc_flash_wait_idle(u8 __iomem *hw_addr)
+{
+	int time = 0;
+	int ret = HAL_OK;
+
+	iowrite32(CMD_CYCLE(8), (hw_addr + 0x10));
+	iowrite32(RD_DATA_CYCLE(8), (hw_addr + 0x14));
+
+	while (1) {
+		mucse_sfc_command(hw_addr, CMD_READ_STATUS);
+		if ((ioread32(hw_addr + 0x4) & 0x1) == 0)
+			break;
+		time++;
+		if (time > 1000)
+			ret = HAL_FAIL;
+	}
+	return ret;
+}
+
+static inline void mucse_sfc_flash_write_enable(u8 __iomem *hw_addr)
+{
+	iowrite32(CMD_CYCLE(8), (hw_addr + 0x10));
+	iowrite32(0x1f, (hw_addr + 0x18));
+	iowrite32(0x100000, (hw_addr + 0x14));
+
+	mucse_sfc_command(hw_addr, CMD_WRITE_ENABLE);
+}
+
+static int mucse_sfc_flash_erase_sector(u8 __iomem *hw_addr,
+					u32 address)
+{
+	int ret = HAL_OK;
+
+	if (address >= RSP_FLASH_HIGH_16M_OFFSET)
+		return HAL_EINVAL;
+
+	if (address % 4096)
+		return HAL_EINVAL;
+
+	mucse_sfc_flash_write_enable(hw_addr);
+
+	iowrite32((CMD_CYCLE(8) | ADDR_CYCLE(24)), (hw_addr + 0x10));
+	iowrite32((RD_DATA_CYCLE(0) | WR_DATA_CYCLE(0)), (hw_addr + 0x14));
+	iowrite32(SFCADDR(address), (hw_addr + 0xc));
+	mucse_sfc_command(hw_addr, CMD_SECTOR_ERASE);
+	if (mucse_sfc_flash_wait_idle(hw_addr)) {
+		ret = HAL_FAIL;
+		goto failed;
+	}
+	mucse_sfc_flash_write_disable(hw_addr);
+
+failed:
+	return ret;
+}
+
+void mucse_sfc_write_protect(struct mucse_hw *hw)
+{
+	mucse_sfc_flash_write_enable(hw->hw_addr);
+
+	iowrite32(CMD_CYCLE(8), (hw->hw_addr + 0x10));
+	iowrite32(WR_DATA_CYCLE(8), (hw->hw_addr + 0x14));
+	iowrite32(0, (hw->hw_addr + 0x04));
+	mucse_sfc_command(hw->hw_addr, CMD_WRITE_STATUS);
+}
+
+/**
+ * mucse_sfc_flash_erase - Erase flash
+ * @hw: Hw structure
+ * @size: Data length
+ *
+ * mucse_sfc_flash_erase tries to erase sfc_flash
+ *
+ * Returns HAL_OK on success, negative on failure
+ **/
+int mucse_sfc_flash_erase(struct mucse_hw *hw, u32 size)
+{
+	u32 addr = SFC_MEM_BASE;
+	u32 i = 0;
+	u32 page_size = 0x1000;
+
+	size = ((size + (page_size - 1)) / page_size) * page_size;
+
+	addr = addr - SFC_MEM_BASE;
+
+	if (size == 0)
+		return HAL_EINVAL;
+
+	if ((addr + size) > RSP_FLASH_HIGH_16M_OFFSET)
+		return HAL_EINVAL;
+
+	if (addr % page_size)
+		return HAL_EINVAL;
+
+	if (size % page_size)
+		return HAL_EINVAL;
+	/* skip the protected info region at 0x1f000-0x20000 */
+	for (i = 0; i < size; i += page_size) {
+		if (i >= 0x1f000 && i < 0x20000)
+			continue;
+
+		mucse_sfc_flash_erase_sector(hw->hw_addr, (addr + i));
+	}
+
+	return HAL_OK;
+}
+
+/**
+ * mucse_download_firmware - Download data to chip
+ * @hw: Hw structure
+ * @data: Data to use
+ * @file_size: Data length
+ *
+ * mucse_download_firmware writes the firmware image to the chip
+ * through the hw_addr registers.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int mucse_download_firmware(struct mucse_hw *hw, const u8 *data,
+			    int file_size)
+{
+	struct device *dev = &hw->pdev->dev;
+	loff_t old_pos = 0;
+	loff_t pos = 0;
+	loff_t end_pos = file_size;
+	u32 rd_len = 0x1000;
+	int get_len = 0;
+	u32 iter = 0;
+	int err = 0;
+	u32 fw_off = 0;
+	u32 old_data = 0;
+	u32 new_data = 0;
+	char *buf = kzalloc(0x1000, GFP_KERNEL);
+
+	if (!buf)
+		return -ENOMEM;
+
+	dev_info(dev, "initializing firmware, which will take some time.");
+	/* copy bin to bar */
+	while (pos < end_pos) {
+		/* we must skip header 4k */
+		if ((pos >= 0x1f000 && pos < 0x20000) || pos == 0) {
+			pos += rd_len;
+			continue;
+		}
+
+		old_pos = pos;
+		if (end_pos - pos < rd_len)
+			get_len = end_pos - pos;
+		else
+			get_len = rd_len;
+
+		memcpy(buf, data + pos, get_len);
+		if ((get_len < rd_len && ((old_pos + get_len) != end_pos)) ||
+		    get_len < 0) {
+			err = -EIO;
+			goto out;
+		}
+
+		for (iter = 0; iter < get_len; iter += 4) {
+			old_data = *((u32 *)(buf + iter));
+			fw_off = (u32)old_pos + iter + 0x1000;
+			iowrite32(old_data, (hw->hw_addr + fw_off));
+		}
+
+		if (pos == old_pos)
+			pos += get_len;
+	}
+	/* write first 4k header */
+	pos = 0;
+	old_pos = pos;
+	get_len = rd_len;
+	memcpy(buf, data + pos, get_len);
+
+	for (iter = 0; iter < get_len; iter += 4) {
+		old_data = *((u32 *)(buf + iter));
+		fw_off = (u32)old_pos + iter + 0x1000;
+		iowrite32(old_data, (hw->hw_addr + fw_off));
+	}
+	dev_info(dev, "Checking for firmware. Wait a moment, please.");
+	/* check */
+	pos = 0x0;
+	while (pos < end_pos) {
+		if (pos >= 0x1f000 && pos < 0x20000) {
+			pos += rd_len;
+			continue;
+		}
+
+		old_pos = pos;
+		if (end_pos - pos < rd_len)
+			get_len = end_pos - pos;
+		else
+			get_len = rd_len;
+
+		memcpy(buf, data + pos, get_len);
+		if ((get_len < rd_len && ((old_pos + get_len) != end_pos)) ||
+		    get_len < 0) {
+			err = -EIO;
+			goto out;
+		}
+
+		for (iter = 0; iter < get_len; iter += 4) {
+			old_data = *((u32 *)(buf + iter));
+			fw_off = (u32)old_pos + iter + 0x1000;
+			new_data = ioread32(hw->hw_addr + fw_off);
+			if (old_data != new_data)
+				err = -EIO;
+		}
+
+		if (pos == old_pos)
+			pos += get_len;
+	}
+out:
+	kfree(buf);
+	return err;
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h
new file mode 100644
index 000000000000..9c381bcda960
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_sfc.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2022 - 2025 Mucse Corporation. */
+
+#ifndef _RNPGBE_SFC_H
+#define _RNPGBE_SFC_H
+
+#include "rnpgbe.h"
+
+/* Return value */
+#define HAL_OK (0)
+#define HAL_FAIL (-1)
+#define HAL_EINVAL (-3) /* Invalid argument */
+#define RSP_FLASH_HIGH_16M_OFFSET 0x1000000
+#define SFC_MEM_BASE 0x28000000
+#define CMD_WRITE_DISABLE 0x04000000
+#define CMD_READ_STATUS 0x05000000
+#define CMD_WRITE_STATUS 0x01000000
+#define CMD_WRITE_ENABLE 0x06000000
+#define CMD_SECTOR_ERASE 0x20000000
+#define SFCADDR(a) ((a) << 8)
+#define CMD_CYCLE(c) (((c) & 0xff) << 0)
+#define RD_DATA_CYCLE(c) (((c) & 0xff) << 8)
+#define WR_DATA_CYCLE(c) (((c) & 0xff) << 0)
+#define ADDR_CYCLE(c) (((c) & 0xff) << 16)
+
+void mucse_sfc_write_protect(struct mucse_hw *hw);
+int mucse_sfc_flash_erase(struct mucse_hw *hw, u32 size);
+int mucse_download_firmware(struct mucse_hw *hw, const u8 *data,
+			    int file_size);
+#endif /* _RNPGBE_SFC_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 06/15] net: rnpgbe: Add some functions for hw->ops
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (4 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 07/15] net: rnpgbe: Add get mac from hw Dong Yibo
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add hw->ops functions (init, reset, ...) to control the chip.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  39 +++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  90 ++++++++
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |  15 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  27 ++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h    |   5 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c | 209 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h |  76 ++++++-
 7 files changed, 452 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 4e07d39d55ba..17297f9ff9c1 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -145,6 +145,25 @@ struct mucse_mbx_info {
 
 #include "rnpgbe_mbx.h"
 
+struct lldp_status {
+	int enable;
+	int inteval;
+};
+
+struct mucse_hw_operations {
+	int (*init_hw)(struct mucse_hw *hw);
+	int (*reset_hw)(struct mucse_hw *hw);
+	void (*start_hw)(struct mucse_hw *hw);
+	/* ops to fw */
+	void (*driver_status)(struct mucse_hw *hw, bool enable, int mode);
+};
+
+enum {
+	mucse_driver_insmod,
+	mucse_driver_suspuse,
+	mucse_driver_force_control_phy,
+};
+
 struct mucse_hw {
 	void *back;
 	u8 pfvfnum;
@@ -167,6 +186,7 @@ struct mucse_hw {
 	int max_vfs;
 	int max_vfs_noari;
 	enum rnpgbe_hw_type hw_type;
+	struct mucse_hw_operations ops;
 	struct mucse_dma_info dma;
 	struct mucse_eth_info eth;
 	struct mucse_mac_info mac;
@@ -191,7 +211,11 @@ struct mucse_hw {
 #define M_HW_FEATURE_EEE ((u32)(1 << 17))
 #define M_HW_SOFT_MASK_OTHER_IRQ ((u32)(1 << 18))
 	u32 feature_flags;
+	u32 driver_version;
 	u16 usecstocount;
+	int nr_lane;
+	struct lldp_status lldp_status;
+	int link;
 };
 
 struct mucse {
@@ -223,7 +247,18 @@ struct rnpgbe_info {
 #define PCI_DEVICE_ID_N210 0x8208
 #define PCI_DEVICE_ID_N210L 0x820a
 
-#define rnpgbe_rd_reg(reg) readl((void *)(reg))
-#define rnpgbe_wr_reg(reg, val) writel((val), (void *)(reg))
+#define m_rd_reg(reg) readl((void *)(reg))
+#define m_wr_reg(reg, val) writel((val), (void *)(reg))
+#define hw_wr32(hw, reg, val) m_wr_reg((hw)->hw_addr + (reg), (val))
+#define dma_wr32(dma, reg, val) m_wr_reg((dma)->dma_base_addr + (reg), (val))
+#define dma_rd32(dma, reg) m_rd_reg((dma)->dma_base_addr + (reg))
+#define eth_wr32(eth, reg, val) m_wr_reg((eth)->eth_base_addr + (reg), (val))
+#define eth_rd32(eth, reg) m_rd_reg((eth)->eth_base_addr + (reg))
+
+#define mucse_err(mucse, fmt, arg...) \
+	dev_err(&(mucse)->pdev->dev, fmt, ##arg)
+
+#define mucse_dbg(mucse, fmt, arg...) \
+	dev_dbg(&(mucse)->pdev->dev, fmt, ##arg)
 
 #endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 08d082fa3066..c495a6f79fd0 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -7,6 +7,94 @@
 #include "rnpgbe.h"
 #include "rnpgbe_hw.h"
 #include "rnpgbe_mbx.h"
+#include "rnpgbe_mbx_fw.h"
+
+static int rnpgbe_init_hw_ops_n500(struct mucse_hw *hw)
+{
+	int status;
+
+	/* Reset the hardware */
+	status = hw->ops.reset_hw(hw);
+	if (status == 0)
+		hw->ops.start_hw(hw);
+
+	return status;
+}
+
+/**
+ * rnpgbe_reset_hw_ops_n500 - Do a hardware reset
+ * @hw: hw information structure
+ *
+ * rnpgbe_reset_hw_ops_n500 asks the firmware to perform a hardware
+ * reset and restores the affected registers to their defaults.
+ **/
+static int rnpgbe_reset_hw_ops_n500(struct mucse_hw *hw)
+{
+	struct mucse_dma_info *dma = &hw->dma;
+	struct mucse_eth_info *eth = &hw->eth;
+	int i;
+
+	/* Stop the DMA engine before resetting */
+	dma_wr32(dma, RNPGBE_DMA_AXI_EN, 0);
+	if (mucse_mbx_fw_reset_phy(hw))
+		return -EIO;
+
+	eth_wr32(eth, RNPGBE_ETH_ERR_MASK_VECTOR,
+		 RNPGBE_PKT_LEN_ERR | RNPGBE_HDR_LEN_ERR);
+	dma_wr32(dma, RNPGBE_DMA_RX_PROG_FULL_THRESH, 0xa);
+	for (i = 0; i < 12; i++)
+		m_wr_reg(hw->ring_msix_base + RING_VECTOR(i), 0);
+
+	hw->link = 0;
+
+	return 0;
+}
+
+/**
+ * rnpgbe_start_hw_ops_n500 - Setup hw to start
+ * @hw: hw information structure
+ *
+ * rnpgbe_start_hw_ops_n500 programs the default hardware state so the
+ * device is ready to start.
+ **/
+static void rnpgbe_start_hw_ops_n500(struct mucse_hw *hw)
+{
+	struct mucse_eth_info *eth = &hw->eth;
+	struct mucse_dma_info *dma = &hw->dma;
+	u32 value;
+
+	value = dma_rd32(dma, RNPGBE_DMA_DUMY);
+	value |= BIT(0);
+	dma_wr32(dma, RNPGBE_DMA_DUMY, value);
+	dma_wr32(dma, RNPGBE_DMA_CONFIG, DMA_VEB_BYPASS);
+	dma_wr32(dma, RNPGBE_DMA_AXI_EN, (RX_AXI_RW_EN | TX_AXI_RW_EN));
+	eth_wr32(eth, RNPGBE_ETH_BYPASS, 0);
+	eth_wr32(eth, RNPGBE_ETH_DEFAULT_RX_RING, 0);
+}
+
+static void rnpgbe_driver_status_hw_ops_n500(struct mucse_hw *hw,
+					     bool enable,
+					     int mode)
+{
+	switch (mode) {
+	case mucse_driver_insmod:
+		mucse_mbx_ifinsmod(hw, enable);
+		break;
+	case mucse_driver_suspuse:
+		mucse_mbx_ifsuspuse(hw, enable);
+		break;
+	case mucse_driver_force_control_phy:
+		mucse_mbx_ifforce_control_mac(hw, enable);
+		break;
+	}
+}
+
+static struct mucse_hw_operations hw_ops_n500 = {
+	.init_hw = &rnpgbe_init_hw_ops_n500,
+	.reset_hw = &rnpgbe_reset_hw_ops_n500,
+	.start_hw = &rnpgbe_start_hw_ops_n500,
+	.driver_status = &rnpgbe_driver_status_hw_ops_n500,
+};
 
 /**
  * rnpgbe_get_invariants_n500 - setup for hw info
@@ -80,7 +168,9 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 		M_NET_FEATURE_STAG_FILTER | M_NET_FEATURE_STAG_OFFLOAD;
 	/* start with the default ahz, update later */
 	hw->usecstocount = 125;
+	hw->max_vfs_noari = 1;
 	hw->max_vfs = 7;
+	memcpy(&hw->ops, &hw_ops_n500, sizeof(hw->ops));
 }
 
 static void rnpgbe_get_invariants_n210(struct mucse_hw *hw)
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index ff7bd9b21550..35e3cb77a38b 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -14,8 +14,21 @@
 #define RNPGBE_RING_BASE (0x1000)
 #define RNPGBE_MAC_BASE (0x20000)
 #define RNPGBE_ETH_BASE (0x10000)
-
+/* dma regs */
+#define DMA_VEB_BYPASS BIT(4)
+#define RNPGBE_DMA_CONFIG (0x0004)
 #define RNPGBE_DMA_DUMY (0x000c)
+#define RNPGBE_DMA_AXI_EN (0x0010)
+#define RX_AXI_RW_EN (0x03 << 0)
+#define TX_AXI_RW_EN (0x03 << 2)
+#define RNPGBE_DMA_RX_PROG_FULL_THRESH (0x00a0)
+#define RING_VECTOR(n) (0x04 * (n))
+/* eth regs */
+#define RNPGBE_ETH_BYPASS (0x8000)
+#define RNPGBE_ETH_ERR_MASK_VECTOR (0x8060)
+#define RNPGBE_ETH_DEFAULT_RX_RING (0x806c)
+#define RNPGBE_PKT_LEN_ERR (2)
+#define RNPGBE_HDR_LEN_ERR (1)
 /* chip resource */
 #define RNPGBE_MAX_QUEUES (8)
 /* multicast control table */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index a7b8eb53cd69..e811e9624ead 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -17,6 +17,7 @@ char rnpgbe_driver_name[] = "rnpgbe";
 static const char rnpgbe_driver_string[] =
 	"mucse 1 Gigabit PCI Express Network Driver";
 #define DRV_VERSION "1.0.0"
+static u32 driver_version = 0x01000000;
 const char rnpgbe_driver_version[] = DRV_VERSION;
 static const char rnpgbe_copyright[] =
 	"Copyright (c) 2020-2025 mucse Corporation.";
@@ -147,6 +148,11 @@ static int init_firmware_for_n210(struct mucse_hw *hw)
 	return err;
 }
 
+static int rnpgbe_sw_init(struct mucse *mucse)
+{
+	return 0;
+}
+
 /**
  * rnpgbe_add_adpater - add netdev for this pci_dev
  * @pdev: PCI device information structure
@@ -202,7 +208,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 		}
 
 		/* get dma version */
-		dma_version = rnpgbe_rd_reg(hw_addr);
+		dma_version = m_rd_reg(hw_addr);
 		break;
 	case rnpgbe_hw_n210:
 	case rnpgbe_hw_n210L:
@@ -219,7 +225,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 		}
 
 		/* get dma version */
-		dma_version = rnpgbe_rd_reg(hw_addr);
+		dma_version = m_rd_reg(hw_addr);
 		break;
 	default:
 		err = -EIO;
@@ -227,8 +233,12 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	}
 	hw->hw_addr = hw_addr;
 	hw->dma.dma_version = dma_version;
+	hw->driver_version = driver_version;
+	hw->nr_lane = 0;
 	ii->get_invariants(hw);
 	hw->mbx.ops.init_params(hw);
+	/* notify fw that the driver is loaded */
+	hw->ops.driver_status(hw, true, mucse_driver_insmod);
 
 	if (mucse_mbx_get_capability(hw)) {
 		dev_err(&pdev->dev,
@@ -237,6 +247,16 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 		goto err_free_net;
 	}
 
+	err = rnpgbe_sw_init(mucse);
+	if (err)
+		goto err_free_net;
+
+	err = hw->ops.reset_hw(hw);
+	if (err) {
+		dev_err(&pdev->dev, "Hw reset failed\n");
+		goto err_free_net;
+	}
+
 	return 0;
 
 err_free_net:
@@ -303,10 +323,13 @@ static int rnpgbe_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 static void rnpgbe_rm_adpater(struct mucse *mucse)
 {
 	struct net_device *netdev;
+	struct mucse_hw *hw = &mucse->hw;
 
 	netdev = mucse->netdev;
 	pr_info("= remove rnpgbe:%s =\n", netdev->name);
+	hw->ops.driver_status(hw, false, mucse_driver_insmod);
 	free_netdev(netdev);
+	mucse->netdev = NULL;
 	pr_info("remove complete\n");
 }
 
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
index 2040b86f4cad..fbb154051313 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
@@ -34,9 +34,8 @@ static inline u32 PF_VF_SHM(struct mucse_mbx_info *mbx, int vf)
 #define MBOX_CTRL_PF_HOLD_SHM (BIT(3)) /* VF:RO, PF:WR */
 #define MBOX_IRQ_EN 0
 #define MBOX_IRQ_DISABLE 1
-#define mbx_rd32(hw, reg) rnpgbe_rd_reg((hw)->hw_addr + (reg))
-#define mbx_wr32(hw, reg, val) rnpgbe_wr_reg((hw)->hw_addr + (reg), (val))
-#define hw_wr32(hw, reg, val) rnpgbe_wr_reg((hw)->hw_addr + (reg), (val))
+#define mbx_rd32(hw, reg) m_rd_reg((hw)->hw_addr + (reg))
+#define mbx_wr32(hw, reg, val) m_wr_reg((hw)->hw_addr + (reg), (val))
 
 extern struct mucse_mbx_operations mucse_mbx_ops_generic;
 
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
index 7fdfccdba80b..8e26ffcabfda 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
@@ -3,6 +3,7 @@
 
 #include <linux/pci.h>
 
+#include "rnpgbe.h"
 #include "rnpgbe_mbx_fw.h"
 
 /**
@@ -139,3 +140,211 @@ int mucse_mbx_get_capability(struct mucse_hw *hw)
 
 	return -EIO;
 }
+
+static struct mbx_req_cookie *mbx_cookie_zalloc(int priv_len)
+{
+	struct mbx_req_cookie *cookie;
+
+	cookie = kzalloc(struct_size(cookie, priv, priv_len), GFP_KERNEL);
+
+	if (cookie) {
+		cookie->timeout_jiffes = 30 * HZ;
+		cookie->magic = COOKIE_MAGIC;
+		cookie->priv_len = priv_len;
+	}
+
+	return cookie;
+}
+
+/**
+ * mucse_mbx_fw_post_req - Post a mbx req to firmware and wait for reply
+ * @hw: Pointer to the HW structure
+ * @req: Pointer to the cmd req structure
+ * @cookie: Pointer to the req cookie
+ *
+ * mucse_mbx_fw_post_req posts a mbx req to firmware and waits for the
+ * reply. cookie->done is set by the irq handler to wake the waiter.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int mucse_mbx_fw_post_req(struct mucse_hw *hw,
+				 struct mbx_fw_cmd_req *req,
+				 struct mbx_req_cookie *cookie)
+{
+	int err = 0;
+
+	/* if pcie is off, nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	cookie->errcode = 0;
+	cookie->done = 0;
+	init_waitqueue_head(&cookie->wait);
+
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+
+	err = mucse_write_mbx(hw, (u32 *)req,
+			      L_WD(req->datalen + MBX_REQ_HDR_LEN),
+			      MBX_FW);
+	if (err) {
+		mutex_unlock(&hw->mbx.lock);
+		return err;
+	}
+
+	if (cookie->timeout_jiffes != 0) {
+retry:
+		err = wait_event_interruptible_timeout(cookie->wait,
+						       cookie->done == 1,
+						       cookie->timeout_jiffes);
+		if (err == -ERESTARTSYS)
+			goto retry;
+		if (err == 0)
+			err = -ETIME;
+		else
+			err = 0;
+	} else {
+		wait_event_interruptible(cookie->wait, cookie->done == 1);
+	}
+
+	mutex_unlock(&hw->mbx.lock);
+
+	if (cookie->errcode)
+		err = cookie->errcode;
+
+	return err;
+}
+
+int rnpgbe_mbx_lldp_get(struct mucse_hw *hw)
+{
+	struct mbx_req_cookie *cookie = NULL;
+	struct get_lldp_reply *get_lldp;
+	struct mbx_fw_cmd_reply reply;
+	struct mbx_fw_cmd_req req;
+	int err;
+
+	cookie = mbx_cookie_zalloc(sizeof(*get_lldp));
+	if (!cookie)
+		return -ENOMEM;
+
+	get_lldp = (struct get_lldp_reply *)cookie->priv;
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+	build_get_lldp_req(&req, cookie, hw->nr_lane);
+	if (hw->mbx.other_irq_enabled) {
+		err = mucse_mbx_fw_post_req(hw, &req, cookie);
+	} else {
+		err = mucse_fw_send_cmd_wait(hw, &req, &reply);
+		get_lldp = &reply.get_lldp;
+	}
+
+	if (err == 0) {
+		hw->lldp_status.enable = get_lldp->value;
+		hw->lldp_status.inteval = get_lldp->inteval;
+	}
+
+	kfree(cookie);
+
+	return err ? -err : 0;
+}
+
+int mucse_mbx_ifinsmod(struct mucse_hw *hw, int status)
+{
+	int err;
+	struct mbx_fw_cmd_req req;
+	struct mbx_fw_cmd_reply reply;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+	build_ifinsmod(&req, hw->driver_version, status);
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+
+	if (status) {
+		err = hw->mbx.ops.write_posted(hw, (u32 *)&req,
+				L_WD((req.datalen + MBX_REQ_HDR_LEN)),
+				MBX_FW);
+	} else {
+		err = hw->mbx.ops.write(hw, (u32 *)&req,
+				L_WD((req.datalen + MBX_REQ_HDR_LEN)),
+				MBX_FW);
+	}
+
+	mutex_unlock(&hw->mbx.lock);
+	return err;
+}
+
+int mucse_mbx_ifsuspuse(struct mucse_hw *hw, int status)
+{
+	int err;
+	struct mbx_fw_cmd_req req;
+	struct mbx_fw_cmd_reply reply;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+	build_ifsuspuse(&req, hw->nr_lane, status);
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+
+	err = hw->mbx.ops.write_posted(hw, (u32 *)&req,
+				       L_WD((req.datalen + MBX_REQ_HDR_LEN)),
+				       MBX_FW);
+	mutex_unlock(&hw->mbx.lock);
+
+	return err;
+}
+
+int mucse_mbx_ifforce_control_mac(struct mucse_hw *hw, int status)
+{
+	int err;
+	struct mbx_fw_cmd_req req;
+	struct mbx_fw_cmd_reply reply;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+	build_ifforce(&req, hw->nr_lane, status);
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+
+	err = hw->mbx.ops.write_posted(hw, (u32 *)&req,
+				       L_WD((req.datalen + MBX_REQ_HDR_LEN)),
+				       MBX_FW);
+	mutex_unlock(&hw->mbx.lock);
+
+	return err;
+}
+
+/**
+ * mucse_mbx_fw_reset_phy - Posts a mbx req to reset hw
+ * @hw: Pointer to the HW structure
+ *
+ * mucse_mbx_fw_reset_phy posts a mbx req to firmware to reset the hw.
+ * It uses mucse_fw_send_cmd_wait when no mailbox irq is available, and
+ * mucse_mbx_fw_post_req when the other irq is registered.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int mucse_mbx_fw_reset_phy(struct mucse_hw *hw)
+{
+	struct mbx_fw_cmd_req req;
+	struct mbx_fw_cmd_reply reply;
+	int ret;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+	if (hw->mbx.other_irq_enabled) {
+		struct mbx_req_cookie *cookie = mbx_cookie_zalloc(0);
+
+		if (!cookie)
+			return -ENOMEM;
+
+		build_reset_phy_req(&req, cookie);
+		ret = mucse_mbx_fw_post_req(hw, &req, cookie);
+		kfree(cookie);
+		return ret;
+	}
+
+	build_reset_phy_req(&req, &req);
+	return mucse_fw_send_cmd_wait(hw, &req, &reply);
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
index c5f2c3ff4068..66d8cd02bc0e 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
@@ -25,7 +25,7 @@ struct mbx_req_cookie {
 	wait_queue_head_t wait;
 	int done;
 	int priv_len;
-	char priv[64];
+	char priv[];
 };
 
 enum MUCSE_FW_CMD {
@@ -525,6 +525,80 @@ static inline void build_phy_abalities_req(struct mbx_fw_cmd_req *req,
 	req->cookie = cookie;
 }
 
+static inline void build_get_lldp_req(struct mbx_fw_cmd_req *req, void *cookie,
+				      int nr_lane)
+{
+#define LLDP_TX_GET (1)
+
+	req->flags = 0;
+	req->opcode = LLDP_TX_CTRL;
+	req->datalen = sizeof(req->lldp_tx);
+	req->cookie = cookie;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->lldp_tx.lane = nr_lane;
+	req->lldp_tx.op = LLDP_TX_GET;
+	req->lldp_tx.enable = 0;
+}
+
+static inline void build_ifinsmod(struct mbx_fw_cmd_req *req,
+				  unsigned int nr_lane,
+				  int status)
+{
+	req->flags = 0;
+	req->opcode = DRIVER_INSMOD;
+	req->datalen = sizeof(req->ifinsmod);
+	req->cookie = NULL;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->ifinsmod.lane = nr_lane;
+	req->ifinsmod.status = status;
+}
+
+static inline void build_ifsuspuse(struct mbx_fw_cmd_req *req,
+				   unsigned int nr_lane,
+				   int status)
+{
+	req->flags = 0;
+	req->opcode = SYSTEM_SUSPUSE;
+	req->datalen = sizeof(req->ifsuspuse);
+	req->cookie = NULL;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->ifsuspuse.lane = nr_lane;
+	req->ifsuspuse.status = status;
+}
+
+static inline void build_ifforce(struct mbx_fw_cmd_req *req,
+				 unsigned int nr_lane,
+				 int status)
+{
+	req->flags = 0;
+	req->opcode = SYSTEM_FORCE;
+	req->datalen = sizeof(req->ifforce);
+	req->cookie = NULL;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->ifforce.lane = nr_lane;
+	req->ifforce.status = status;
+}
+
+static inline void build_reset_phy_req(struct mbx_fw_cmd_req *req,
+				       void *cookie)
+{
+	req->flags = 0;
+	req->opcode = RESET_PHY;
+	req->datalen = 0;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->cookie = cookie;
+}
+
 int mucse_mbx_get_capability(struct mucse_hw *hw);
+int rnpgbe_mbx_lldp_get(struct mucse_hw *hw);
+int mucse_mbx_ifinsmod(struct mucse_hw *hw, int status);
+int mucse_mbx_ifsuspuse(struct mucse_hw *hw, int status);
+int mucse_mbx_ifforce_control_mac(struct mucse_hw *hw, int status);
+int mucse_mbx_fw_reset_phy(struct mucse_hw *hw);
 
 #endif /* _RNPGBE_MBX_FW_H */
-- 
2.25.1
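As a side note on patch 06: it converts the mailbox cookie's fixed `priv[64]` buffer into a flexible array sized per request with `struct_size()`/`kzalloc()`. A minimal userspace sketch of that allocation pattern (all names here are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace analogue of mbx_cookie_zalloc(): allocate the cookie header
 * and its variable-length priv[] payload as one zeroed block, mirroring
 * the struct_size()/kzalloc() pairing in the patch. */
struct cookie_sketch {
	int priv_len;
	char priv[];	/* flexible array member, sized at allocation */
};

static struct cookie_sketch *cookie_zalloc_sketch(int priv_len)
{
	struct cookie_sketch *c = calloc(1, sizeof(*c) + priv_len);

	if (c)
		c->priv_len = priv_len;
	return c;
}
```

The single allocation keeps header and payload contiguous, so one kfree()/free() releases both.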


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 07/15] net: rnpgbe: Add get mac from hw
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (5 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 06/15] net: rnpgbe: Add some functions for hw->ops Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 08/15] net: rnpgbe: Add irq support Dong Yibo
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add functions to get the MAC address from hardware.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  65 ++++++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   | 126 ++++++++++++++++++
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |   9 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  30 +++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c |  63 +++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h |  17 +++
 6 files changed, 306 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 17297f9ff9c1..93c3e8f50a80 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -33,8 +33,18 @@ struct mucse_dma_info {
 	u32 dma_version;
 };
 
+struct mucse_eth_info;
+
+struct mucse_eth_operations {
+	s32 (*get_mac_addr)(struct mucse_eth_info *eth, u8 *addr);
+	s32 (*set_rar)(struct mucse_eth_info *eth, u32 index, u8 *addr);
+	s32 (*clear_rar)(struct mucse_eth_info *eth, u32 index);
+	void (*clr_mc_addr)(struct mucse_eth_info *eth);
+};
+
 #define RNPGBE_MAX_MTA 128
 struct mucse_eth_info {
+	struct mucse_eth_operations ops;
 	u8 __iomem *eth_base_addr;
 	void *back;
 	u32 mta_shadow[RNPGBE_MAX_MTA];
@@ -61,8 +71,13 @@ struct mucse_mac_info {
 	struct mii_regs mii;
 	int phy_addr;
 	int clk_csr;
-	u8 addr[ETH_ALEN];
-	u8 perm_addr[ETH_ALEN];
+};
+
+struct mucse_addr_filter_info {
+	u32 num_mc_addrs;
+	u32 rar_used_count;
+	u32 mta_in_use;
+	bool user_set_promisc;
 };
 
 struct mucse_hw;
@@ -154,6 +169,7 @@ struct mucse_hw_operations {
 	int (*init_hw)(struct mucse_hw *hw);
 	int (*reset_hw)(struct mucse_hw *hw);
 	void (*start_hw)(struct mucse_hw *hw);
+	void (*init_rx_addrs)(struct mucse_hw *hw);
 	/* ops to fw */
 	void (*driver_status)(struct mucse_hw *hw, bool enable, int mode);
 };
@@ -177,6 +193,10 @@ struct mucse_hw {
 	u16 subsystem_vendor_id;
 	u32 wol;
 	u32 wol_en;
+	u16 min_len_cap;
+	u16 max_len_cap;
+	u16 min_len_cur;
+	u16 max_len_cur;
 	u32 fw_version;
 	u32 axi_mhz;
 	u32 bd_uid;
@@ -191,6 +211,7 @@ struct mucse_hw {
 	struct mucse_eth_info eth;
 	struct mucse_mac_info mac;
 	struct mucse_mbx_info mbx;
+	struct mucse_addr_filter_info addr_ctrl;
 #define M_NET_FEATURE_SG ((u32)(1 << 0))
 #define M_NET_FEATURE_TX_CHECKSUM ((u32)(1 << 1))
 #define M_NET_FEATURE_RX_CHECKSUM ((u32)(1 << 2))
@@ -211,11 +232,27 @@ struct mucse_hw {
 #define M_HW_FEATURE_EEE ((u32)(1 << 17))
 #define M_HW_SOFT_MASK_OTHER_IRQ ((u32)(1 << 18))
 	u32 feature_flags;
+	u32 flags;
+#define M_FLAGS_INIT_MAC_ADDRESS ((u32)(1 << 27))
 	u32 driver_version;
 	u16 usecstocount;
+	u16 max_msix_vectors;
 	int nr_lane;
 	struct lldp_status lldp_status;
 	int link;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+};
+
+enum mucse_state_t {
+	__MUCSE_DOWN,
+	__MUCSE_SERVICE_SCHED,
+	__MUCSE_PTP_TX_IN_PROGRESS,
+	__MUCSE_USE_VFINFI,
+	__MUCSE_IN_IRQ,
+	__MUCSE_REMOVE,
+	__MUCSE_SERVICE_CHECK,
+	__MUCSE_EEE_REMOVE,
 };
 
 struct mucse {
@@ -224,9 +261,20 @@ struct mucse {
 	struct mucse_hw hw;
 	/* board number */
 	u16 bd_number;
+	u16 tx_work_limit;
 	u32 flags2;
 #define M_FLAG2_NO_NET_REG ((u32)(1 << 0))
-
+	u32 priv_flags;
+#define M_PRIV_FLAG_TX_COALESCE ((u32)(1 << 25))
+#define M_PRIV_FLAG_RX_COALESCE ((u32)(1 << 26))
+	int tx_ring_item_count;
+	int rx_ring_item_count;
+	int napi_budge;
+	u16 rx_usecs;
+	u16 rx_frames;
+	u16 tx_frames;
+	u16 tx_usecs;
+	unsigned long state;
 	char name[60];
 };
 
@@ -247,6 +295,15 @@ struct rnpgbe_info {
 #define PCI_DEVICE_ID_N210 0x8208
 #define PCI_DEVICE_ID_N210L 0x820a
 
+#define M_DEFAULT_TXD (512)
+#define M_DEFAULT_TX_WORK (128)
+#define M_PKT_TIMEOUT_TX (200)
+#define M_TX_PKT_POLL_BUDGET (0x30)
+
+#define M_DEFAULT_RXD (512)
+#define M_PKT_TIMEOUT (30)
+#define M_RX_PKT_POLL_BUDGET (64)
+
 #define m_rd_reg(reg) readl((void *)(reg))
 #define m_wr_reg(reg, val) writel((val), (void *)(reg))
 #define hw_wr32(hw, reg, val) m_wr_reg((hw)->hw_addr + (reg), (val))
@@ -261,4 +318,6 @@ struct rnpgbe_info {
 #define mucse_dbg(mucse, fmt, arg...) \
 	dev_dbg(&(mucse)->pdev->dev, fmt, ##arg)
 
+/* error codes */
+#define MUCSE_ERR_INVALID_ARGUMENT (-1)
 #endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index c495a6f79fd0..5b01ecd641f2 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -3,12 +3,88 @@
 
 #include <linux/types.h>
 #include <linux/string.h>
+#include <linux/etherdevice.h>
 
 #include "rnpgbe.h"
 #include "rnpgbe_hw.h"
 #include "rnpgbe_mbx.h"
 #include "rnpgbe_mbx_fw.h"
 
+/**
+ * rnpgbe_eth_set_rar_n500 - Set Rx address register
+ * @eth: pointer to eth structure
+ * @index: Receive address register to write
+ * @addr: Address to put into receive address register
+ *
+ * Puts an ethernet address into a receive address register.
+ **/
+static s32 rnpgbe_eth_set_rar_n500(struct mucse_eth_info *eth,
+				   u32 index, u8 *addr)
+{
+	u32 mcstctrl;
+	u32 rar_low, rar_high = 0;
+	u32 rar_entries = eth->num_rar_entries;
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries)
+		return MUCSE_ERR_INVALID_ARGUMENT;
+
+	/* The address arrives in network (big endian) byte order;
+	 * pack it into the little endian register layout the HW expects.
+	 */
+	rar_low = ((u32)addr[5] | ((u32)addr[4] << 8) |
+		   ((u32)addr[3] << 16) |
+		   ((u32)addr[2] << 24));
+	rar_high |= ((u32)addr[1] | ((u32)addr[0] << 8));
+	rar_high |= RNPGBE_RAH_AV;
+
+	eth_wr32(eth, RNPGBE_ETH_RAR_RL(index), rar_low);
+	eth_wr32(eth, RNPGBE_ETH_RAR_RH(index), rar_high);
+	/* open unicast filter */
+	mcstctrl = eth_rd32(eth, RNPGBE_ETH_DMAC_MCSTCTRL);
+	mcstctrl |= RNPGBE_MCSTCTRL_UNICASE_TBL_EN;
+	eth_wr32(eth, RNPGBE_ETH_DMAC_MCSTCTRL, mcstctrl);
+
+	return 0;
+}
+
+/**
+ * rnpgbe_eth_clear_rar_n500 - Remove Rx address register
+ * @eth: pointer to eth structure
+ * @index: Receive address register to write
+ *
+ * Clears an ethernet address from a receive address register.
+ **/
+static s32 rnpgbe_eth_clear_rar_n500(struct mucse_eth_info *eth,
+				     u32 index)
+{
+	u32 rar_entries = eth->num_rar_entries;
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries)
+		return MUCSE_ERR_INVALID_ARGUMENT;
+
+	eth_wr32(eth, RNPGBE_ETH_RAR_RL(index), 0);
+	eth_wr32(eth, RNPGBE_ETH_RAR_RH(index), 0);
+
+	return 0;
+}
+
+static void rnpgbe_eth_clr_mc_addr_n500(struct mucse_eth_info *eth)
+{
+	int i;
+
+	for (i = 0; i < eth->mcft_size; i++)
+		eth_wr32(eth, RNPGBE_ETH_MUTICAST_HASH_TABLE(i), 0);
+}
+
+static struct mucse_eth_operations eth_ops_n500 = {
+	.set_rar = &rnpgbe_eth_set_rar_n500,
+	.clear_rar = &rnpgbe_eth_clear_rar_n500,
+	.clr_mc_addr = &rnpgbe_eth_clr_mc_addr_n500
+};
+
 static int rnpgbe_init_hw_ops_n500(struct mucse_hw *hw)
 {
 	int status = 0;
@@ -20,6 +96,19 @@ static int rnpgbe_init_hw_ops_n500(struct mucse_hw *hw)
 	return status;
 }
 
+static void rnpgbe_get_permtion_mac(struct mucse_hw *hw,
+				    u8 *mac_addr)
+{
+	if (mucse_fw_get_macaddr(hw, hw->pfvfnum, mac_addr, hw->nr_lane) ||
+	    !is_valid_ether_addr(mac_addr))
+		eth_random_addr(mac_addr);
+
+	hw->flags |= M_FLAGS_INIT_MAC_ADDRESS;
+}
+
 /**
  * rnpgbe_reset_hw_ops_n500 - Do a hardware reset
  * @hw: hw information structure
@@ -37,7 +126,13 @@ static int rnpgbe_reset_hw_ops_n500(struct mucse_hw *hw)
 	dma_wr32(dma, RNPGBE_DMA_AXI_EN, 0);
 	if (mucse_mbx_fw_reset_phy(hw))
 		return -EIO;
+	/* Store the permanent mac address */
+	if (!(hw->flags & M_FLAGS_INIT_MAC_ADDRESS)) {
+		rnpgbe_get_permtion_mac(hw, hw->perm_addr);
+		memcpy(hw->addr, hw->perm_addr, ETH_ALEN);
+	}
 
+	hw->ops.init_rx_addrs(hw);
 	eth_wr32(eth, RNPGBE_ETH_ERR_MASK_VECTOR,
 		 RNPGBE_PKT_LEN_ERR | RNPGBE_HDR_LEN_ERR);
 	dma_wr32(dma, RNPGBE_DMA_RX_PROG_FULL_THRESH, 0xa);
@@ -89,10 +184,38 @@ static void rnpgbe_driver_status_hw_ops_n500(struct mucse_hw *hw,
 	}
 }
 
+static void rnpgbe_init_rx_addrs_hw_ops_n500(struct mucse_hw *hw)
+{
+	struct mucse_eth_info *eth = &hw->eth;
+	u32 i;
+	u32 rar_entries = eth->num_rar_entries;
+	u32 v;
+
+	/* hw->addr may have been set by sw; fall back to perm_addr */
+	if (!is_valid_ether_addr(hw->addr))
+		memcpy(hw->addr, hw->perm_addr, ETH_ALEN);
+	eth->ops.set_rar(eth, 0, hw->addr);
+
+	hw->addr_ctrl.rar_used_count = 1;
+	/* Clear other rar addresses. */
+	for (i = 1; i < rar_entries; i++)
+		eth->ops.clear_rar(eth, i);
+
+	/* Clear the MTA */
+	hw->addr_ctrl.mta_in_use = 0;
+	v = eth_rd32(eth, RNPGBE_ETH_DMAC_MCSTCTRL);
+	v &= (~0x3);
+	v |= eth->mc_filter_type;
+	eth_wr32(eth, RNPGBE_ETH_DMAC_MCSTCTRL, v);
+	eth->ops.clr_mc_addr(eth);
+}
+
 static struct mucse_hw_operations hw_ops_n500 = {
 	.init_hw = &rnpgbe_init_hw_ops_n500,
 	.reset_hw = &rnpgbe_reset_hw_ops_n500,
 	.start_hw = &rnpgbe_start_hw_ops_n500,
+	.init_rx_addrs = &rnpgbe_init_rx_addrs_hw_ops_n500,
 	.driver_status = &rnpgbe_driver_status_hw_ops_n500,
 };
 
@@ -121,6 +244,7 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 	dma->max_rx_queues = RNPGBE_MAX_QUEUES;
 	dma->back = hw;
 	/* setup eth info */
+	memcpy(&hw->eth.ops, &eth_ops_n500, sizeof(hw->eth.ops));
 	eth->eth_base_addr = hw->hw_addr + RNPGBE_ETH_BASE;
 	eth->back = hw;
 	eth->mc_filter_type = 0;
@@ -170,6 +294,8 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 	hw->usecstocount = 125;
 	hw->max_vfs_noari = 1;
 	hw->max_vfs = 7;
+	hw->min_len_cap = RNPGBE_MIN_LEN;
+	hw->max_len_cap = RNPGBE_MAX_LEN;
 	memcpy(&hw->ops, &hw_ops_n500, sizeof(hw->ops));
 }
 
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index 35e3cb77a38b..bcb4da45feac 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -29,6 +29,12 @@
 #define RNPGBE_ETH_DEFAULT_RX_RING (0x806c)
 #define RNPGBE_PKT_LEN_ERR (2)
 #define RNPGBE_HDR_LEN_ERR (1)
+#define RNPGBE_MCSTCTRL_UNICASE_TBL_EN BIT(3)
+#define RNPGBE_ETH_DMAC_MCSTCTRL (0x9114)
+#define RNPGBE_RAH_AV (0x80000000)
+#define RNPGBE_ETH_RAR_RL(n) (0xa000 + 0x04 * (n))
+#define RNPGBE_ETH_RAR_RH(n) (0xa400 + 0x04 * (n))
+#define RNPGBE_ETH_MUTICAST_HASH_TABLE(n) (0xac00 + 0x04 * (n))
 /* chip resource */
 #define RNPGBE_MAX_QUEUES (8)
 /* multicast control table */
@@ -36,7 +42,8 @@
 /* vlan filter table */
 #define RNPGBE_VFT_TBL_SIZE (128)
 #define RNPGBE_RAR_ENTRIES (32)
-
+#define RNPGBE_MIN_LEN (68)
+#define RNPGBE_MAX_LEN (9722)
 #define RNPGBE_MII_ADDR 0x00000010 /* MII Address */
 #define RNPGBE_MII_DATA 0x00000014 /* MII Data */
 #endif /* _RNPGBE_HW_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index e811e9624ead..d99da9838e27 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -150,6 +150,27 @@ static int init_firmware_for_n210(struct mucse_hw *hw)
 
 static int rnpgbe_sw_init(struct mucse *mucse)
 {
+	struct mucse_hw *hw = &mucse->hw;
+	struct pci_dev *pdev = mucse->pdev;
+
+	hw->vendor_id = pdev->vendor;
+	hw->device_id = pdev->device;
+	hw->subsystem_vendor_id = pdev->subsystem_vendor;
+	hw->subsystem_device_id = pdev->subsystem_device;
+	mucse->napi_budge = 64;
+	/* set default work limits */
+	mucse->tx_work_limit = M_DEFAULT_TX_WORK;
+	mucse->tx_usecs = M_PKT_TIMEOUT_TX;
+	mucse->tx_frames = M_TX_PKT_POLL_BUDGET;
+	mucse->rx_usecs = M_PKT_TIMEOUT;
+	mucse->rx_frames = M_RX_PKT_POLL_BUDGET;
+	mucse->priv_flags &= ~M_PRIV_FLAG_RX_COALESCE;
+	mucse->priv_flags &= ~M_PRIV_FLAG_TX_COALESCE;
+	/* set default ring sizes */
+	mucse->tx_ring_item_count = M_DEFAULT_TXD;
+	mucse->rx_ring_item_count = M_DEFAULT_RXD;
+	set_bit(__MUCSE_DOWN, &mucse->state);
+
 	return 0;
 }
 
@@ -257,6 +278,15 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 		goto err_free_net;
 	}
 
+	netdev->min_mtu = hw->min_len_cap;
+	netdev->max_mtu = hw->max_len_cap - (ETH_HLEN + 2 * ETH_FCS_LEN);
+	netdev->features |= NETIF_F_HIGHDMA;
+	netdev->priv_flags |= IFF_UNICAST_FLT;
+	netdev->priv_flags |= IFF_SUPP_NOFCS;
+	eth_hw_addr_set(netdev, hw->perm_addr);
+	strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
+	memcpy(netdev->perm_addr, hw->perm_addr, netdev->addr_len);
+
 	return 0;
 
 err_free_net:
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
index 8e26ffcabfda..a9c5caa764a0 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
@@ -348,3 +348,66 @@ int mucse_mbx_fw_reset_phy(struct mucse_hw *hw)
 		return mucse_fw_send_cmd_wait(hw, &req, &reply);
 	}
 }
+
+/**
+ * mucse_fw_get_macaddr - Posts a mbx req to request macaddr
+ * @hw: Pointer to the HW structure
+ * @pfvfnum: Index of pf/vf num
+ * @mac_addr: Pointer to store mac_addr
+ * @nr_lane: Lane index
+ *
+ * mucse_fw_get_macaddr posts a mbx req to firmware to get mac_addr.
+ * It uses mucse_fw_send_cmd_wait when no mailbox irq is available, and
+ * mucse_mbx_fw_post_req when the other irq is registered.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int mucse_fw_get_macaddr(struct mucse_hw *hw, int pfvfnum,
+			 u8 *mac_addr,
+			 int nr_lane)
+{
+	int err = 0;
+	struct mbx_fw_cmd_req req;
+	struct mbx_fw_cmd_reply reply;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+
+	if (!mac_addr)
+		return -EINVAL;
+
+	if (hw->mbx.other_irq_enabled) {
+		struct mbx_req_cookie *cookie =
+			mbx_cookie_zalloc(sizeof(reply.mac_addr));
+		struct mac_addr *mac;
+
+		if (!cookie)
+			return -ENOMEM;
+
+		mac = (struct mac_addr *)cookie->priv;
+
+		build_get_macaddress_req(&req, 1 << nr_lane, pfvfnum, cookie);
+		err = mucse_mbx_fw_post_req(hw, &req, cookie);
+		if (err) {
+			kfree(cookie);
+			goto out;
+		}
+
+		if ((1 << nr_lane) & mac->lanes)
+			memcpy(mac_addr, mac->addrs[nr_lane].mac, ETH_ALEN);
+		else
+			err = -ENODATA;
+
+		kfree(cookie);
+	} else {
+		build_get_macaddress_req(&req, 1 << nr_lane, pfvfnum, &req);
+		err = mucse_fw_send_cmd_wait(hw, &req, &reply);
+		if (err)
+			goto out;
+
+		if ((1 << nr_lane) & reply.mac_addr.lanes)
+			memcpy(mac_addr, reply.mac_addr.addrs[nr_lane].mac,
+			       ETH_ALEN);
+		else
+			err = -ENODATA;
+	}
+out:
+	return err;
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
index 66d8cd02bc0e..babdfc1f56f1 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
@@ -594,11 +594,28 @@ static inline void build_reset_phy_req(struct mbx_fw_cmd_req *req,
 	req->cookie = cookie;
 }
 
+static inline void build_get_macaddress_req(struct mbx_fw_cmd_req *req,
+					    int lane_mask, int pfvfnum,
+					    void *cookie)
+{
+	req->flags = 0;
+	req->opcode = GET_MAC_ADDRES;
+	req->datalen = sizeof(req->get_mac_addr);
+	req->cookie = cookie;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+
+	req->get_mac_addr.lane_mask = lane_mask;
+	req->get_mac_addr.pfvf_num = pfvfnum;
+}
+
 int mucse_mbx_get_capability(struct mucse_hw *hw);
 int rnpgbe_mbx_lldp_get(struct mucse_hw *hw);
 int mucse_mbx_ifinsmod(struct mucse_hw *hw, int status);
 int mucse_mbx_ifsuspuse(struct mucse_hw *hw, int status);
 int mucse_mbx_ifforce_control_mac(struct mucse_hw *hw, int status);
 int mucse_mbx_fw_reset_phy(struct mucse_hw *hw);
+int mucse_fw_get_macaddr(struct mucse_hw *hw, int pfvfnum,
+			 u8 *mac_addr, int nr_lane);
 
 #endif /* _RNPGBE_MBX_FW_H */
-- 
2.25.1
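For reference, the register packing in patch 07's rnpgbe_eth_set_rar_n500() can be checked in isolation: the last four MAC bytes go into the low register in reversed order, and the first two plus the address-valid bit into the high register. A userspace sketch (function and macro names are illustrative; the bit layout follows the patch):

```c
#include <assert.h>
#include <stdint.h>

#define RAH_AV_SKETCH 0x80000000u	/* address-valid bit, as RNPGBE_RAH_AV */

/* Pack a 6-byte MAC address into the two RAR register values used by
 * the set_rar op: bytes 5..2 into the low word, bytes 1..0 plus the
 * address-valid flag into the high word. */
static void pack_rar_sketch(const uint8_t *addr, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)addr[5] | ((uint32_t)addr[4] << 8) |
	      ((uint32_t)addr[3] << 16) | ((uint32_t)addr[2] << 24);
	*hi = ((uint32_t)addr[1] | ((uint32_t)addr[0] << 8)) | RAH_AV_SKETCH;
}
```

For 00:11:22:33:44:55 this yields a low word of 0x22334455 and a high word of 0x80000011, matching the byte-order comment in the patch.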


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 08/15] net: rnpgbe: Add irq support
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (6 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 07/15] net: rnpgbe: Add get mac from hw Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 09/15] net: rnpgbe: Add netdev register and init tx/rx memory Dong Yibo
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Initialize irq vectors, q_vectors and ring structures for driver use.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/Makefile    |   3 +-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    | 156 +++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |   2 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 462 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  26 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   | 129 ++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c |   6 +-
 7 files changed, 777 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
 create mode 100644 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h

diff --git a/drivers/net/ethernet/mucse/rnpgbe/Makefile b/drivers/net/ethernet/mucse/rnpgbe/Makefile
index db7d3a8140b2..c5a41406fd60 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/Makefile
+++ b/drivers/net/ethernet/mucse/rnpgbe/Makefile
@@ -9,4 +9,5 @@ rnpgbe-objs := rnpgbe_main.o \
 	       rnpgbe_chip.o \
 	       rnpgbe_mbx.o \
 	       rnpgbe_mbx_fw.o \
-	       rnpgbe_sfc.o
+	       rnpgbe_sfc.o \
+	       rnpgbe_lib.o
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 93c3e8f50a80..82df7f133f10 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -6,6 +6,7 @@
 
 #include <linux/types.h>
 #include <linux/netdevice.h>
+#include <linux/pci.h>
 
 extern const struct rnpgbe_info rnpgbe_n500_info;
 extern const struct rnpgbe_info rnpgbe_n210_info;
@@ -135,7 +136,7 @@ struct mucse_mbx_info {
 	u16 fw_ack;
 	/* lock for only one use mbx */
 	struct mutex lock;
-	bool other_irq_enabled;
+	bool irq_enabled;
 	int mbx_size;
 	int mbx_mem_size;
 #define MBX_FEATURE_NO_ZERO BIT(0)
@@ -233,6 +234,7 @@ struct mucse_hw {
 #define M_HW_SOFT_MASK_OTHER_IRQ ((u32)(1 << 18))
 	u32 feature_flags;
 	u32 flags;
+#define M_FLAG_MSI_CAPABLE ((u32)(1 << 0))
 #define M_FLAGS_INIT_MAC_ADDRESS ((u32)(1 << 27))
 	u32 driver_version;
 	u16 usecstocount;
@@ -255,6 +257,136 @@ enum mucse_state_t {
 	__MUCSE_EEE_REMOVE,
 };
 
+enum irq_mode_enum {
+	irq_mode_legacy,
+	irq_mode_msi,
+	irq_mode_msix,
+};
+
+struct mucse_queue_stats {
+	u64 packets;
+	u64 bytes;
+};
+
+struct mucse_tx_queue_stats {
+	u64 restart_queue;
+	u64 tx_busy;
+	u64 tx_done_old;
+	u64 clean_desc;
+	u64 poll_count;
+	u64 irq_more_count;
+	u64 send_bytes;
+	u64 send_bytes_to_hw;
+	u64 todo_update;
+	u64 send_done_bytes;
+	u64 vlan_add;
+	u64 tx_next_to_clean;
+	u64 tx_irq_miss;
+	u64 tx_equal_count;
+	u64 tx_clean_times;
+	u64 tx_clean_count;
+};
+
+struct mucse_rx_queue_stats {
+	u64 driver_drop_packets;
+	u64 rsc_count;
+	u64 rsc_flush;
+	u64 non_eop_descs;
+	u64 alloc_rx_page_failed;
+	u64 alloc_rx_buff_failed;
+	u64 alloc_rx_page;
+	u64 csum_err;
+	u64 csum_good;
+	u64 poll_again_count;
+	u64 vlan_remove;
+	u64 rx_next_to_clean;
+	u64 rx_irq_miss;
+	u64 rx_equal_count;
+	u64 rx_clean_times;
+	u64 rx_clean_count;
+};
+
+struct mucse_ring {
+	struct mucse_ring *next;
+	struct mucse_q_vector *q_vector;
+	struct net_device *netdev;
+	struct device *dev;
+	void *desc;
+	union {
+		struct mucse_tx_buffer *tx_buffer_info;
+		struct mucse_rx_buffer *rx_buffer_info;
+	};
+	unsigned long last_rx_timestamp;
+	unsigned long state;
+	u8 __iomem *ring_addr;
+	u8 __iomem *tail;
+	u8 __iomem *dma_int_stat;
+	u8 __iomem *dma_int_mask;
+	u8 __iomem *dma_int_clr;
+	dma_addr_t dma;
+	unsigned int size;
+	u32 ring_flags;
+#define M_RING_FLAG_DELAY_SETUP_RX_LEN ((u32)(1 << 0))
+#define M_RING_FLAG_CHANGE_RX_LEN ((u32)(1 << 1))
+#define M_RING_FLAG_DO_RESET_RX_LEN ((u32)(1 << 2))
+#define M_RING_SKIP_TX_START ((u32)(1 << 3))
+#define M_RING_NO_TUNNEL_SUPPORT ((u32)(1 << 4))
+#define M_RING_SIZE_CHANGE_FIX ((u32)(1 << 5))
+#define M_RING_SCATER_SETUP ((u32)(1 << 6))
+#define M_RING_STAGS_SUPPORT ((u32)(1 << 7))
+#define M_RING_DOUBLE_VLAN_SUPPORT ((u32)(1 << 8))
+#define M_RING_VEB_MULTI_FIX ((u32)(1 << 9))
+#define M_RING_IRQ_MISS_FIX ((u32)(1 << 10))
+#define M_RING_OUTER_VLAN_FIX ((u32)(1 << 11))
+#define M_RING_CHKSM_FIX ((u32)(1 << 12))
+#define M_RING_LOWER_ITR ((u32)(1 << 13))
+	u8 pfvfnum;
+	u16 count;
+	u16 temp_count;
+	u16 reset_count;
+	u8 queue_index;
+	u8 rnpgbe_queue_idx;
+	u16 next_to_use;
+	u16 next_to_clean;
+	u16 device_id;
+	struct mucse_queue_stats stats;
+	struct u64_stats_sync syncp;
+	union {
+		struct mucse_tx_queue_stats tx_stats;
+		struct mucse_rx_queue_stats rx_stats;
+	};
+} ____cacheline_internodealigned_in_smp;
+
+struct mucse_ring_container {
+	struct mucse_ring *ring;
+	u16 work_limit;
+	u16 count;
+};
+
+struct mucse_q_vector {
+	struct mucse *mucse;
+	int v_idx;
+	u16 itr_rx;
+	u16 itr_tx;
+	struct mucse_ring_container rx, tx;
+	struct napi_struct napi;
+	cpumask_t affinity_mask;
+	struct irq_affinity_notify affinity_notify;
+	int numa_node;
+	struct rcu_head rcu; /* to avoid race with update stats on free */
+	u32 vector_flags;
+#define M_QVECTOR_FLAG_IRQ_MISS_CHECK ((u32)(1 << 0))
+#define M_QVECTOR_FLAG_ITR_FEATURE ((u32)(1 << 1))
+#define M_QVECTOR_FLAG_REDUCE_TX_IRQ_MISS ((u32)(1 << 2))
+	char name[IFNAMSIZ + 9];
+	/* for dynamic allocation of rings associated with this q_vector */
+	struct mucse_ring ring[] ____cacheline_internodealigned_in_smp;
+};
+
+#define MAX_TX_QUEUES (8)
+#define MAX_RX_QUEUES (8)
+#define MAX_Q_VECTORS (64)
+
 struct mucse {
 	struct net_device *netdev;
 	struct pci_dev *pdev;
@@ -262,19 +394,37 @@ struct mucse {
 	/* board number */
 	u16 bd_number;
 	u16 tx_work_limit;
+	u32 flags;
+#define M_FLAG_NEED_LINK_UPDATE ((u32)(1 << 0))
+#define M_FLAG_MSIX_ENABLED ((u32)(1 << 1))
+#define M_FLAG_MSI_ENABLED ((u32)(1 << 2))
 	u32 flags2;
 #define M_FLAG2_NO_NET_REG ((u32)(1 << 0))
+#define M_FLAG2_INSMOD ((u32)(1 << 1))
 	u32 priv_flags;
 #define M_PRIV_FLAG_TX_COALESCE ((u32)(1 << 25))
 #define M_PRIV_FLAG_RX_COALESCE ((u32)(1 << 26))
+	struct mucse_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
 	int tx_ring_item_count;
+	int num_tx_queues;
+	struct mucse_ring *rx_ring[MAX_RX_QUEUES] ____cacheline_aligned_in_smp;
 	int rx_ring_item_count;
+	int num_rx_queues;
+	int num_other_vectors;
+	int irq_mode;
+	struct msix_entry *msix_entries;
+	struct mucse_q_vector *q_vector[MAX_Q_VECTORS];
+	int num_q_vectors;
+	int max_q_vectors;
+	int q_vector_off;
 	int napi_budge;
 	u16 rx_usecs;
 	u16 rx_frames;
 	u16 tx_frames;
 	u16 tx_usecs;
 	unsigned long state;
+	struct timer_list service_timer;
+	struct work_struct service_task;
 	char name[60];
 };
 
@@ -320,4 +470,8 @@ struct rnpgbe_info {
 
 /* error codes */
 #define MUCSE_ERR_INVALID_ARGUMENT (-1)
+
+void rnpgbe_service_event_schedule(struct mucse *mucse);
+int rnpgbe_poll(struct napi_struct *napi, int budget);
+
 #endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 5b01ecd641f2..e94a432dd7b6 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -296,6 +296,8 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 	hw->max_vfs = 7;
 	hw->min_len_cap = RNPGBE_MIN_LEN;
 	hw->max_len_cap = RNPGBE_MAX_LEN;
+	hw->max_msix_vectors = 26;
+	hw->flags |= M_FLAG_MSI_CAPABLE;
 	memcpy(&hw->ops, &hw_ops_n500, sizeof(hw->ops));
 }
 
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
new file mode 100644
index 000000000000..95c913212182
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -0,0 +1,462 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#include "rnpgbe.h"
+#include "rnpgbe_lib.h"
+
+/**
+ * rnpgbe_set_rss_queues - Allocate queues for RSS
+ * @mucse: pointer to private structure
+ *
+ * Try to determine the number of queues to use with RSS.
+ *
+ **/
+static bool rnpgbe_set_rss_queues(struct mucse *mucse)
+{
+	return true;
+}
+
+/**
+ * rnpgbe_set_num_queues - Allocate queues for device, feature dependent
+ * @mucse: pointer to private structure
+ *
+ * Determine the number of tx/rx queues.
+ **/
+static void rnpgbe_set_num_queues(struct mucse *mucse)
+{
+	/* Start with base case */
+	mucse->num_tx_queues = 1;
+	mucse->num_rx_queues = 1;
+
+	rnpgbe_set_rss_queues(mucse);
+}
+
+static int rnpgbe_acquire_msix_vectors(struct mucse *mucse,
+				       int vectors)
+{
+	int err;
+
+	err = pci_enable_msix_range(mucse->pdev, mucse->msix_entries,
+				    vectors, vectors);
+	if (err < 0) {
+		kfree(mucse->msix_entries);
+		mucse->msix_entries = NULL;
+		return -EINVAL;
+	}
+
+	vectors -= mucse->num_other_vectors;
+	/* set the number of q_vectors actually granted */
+	mucse->num_q_vectors = min(vectors, mucse->max_q_vectors);
+
+	return 0;
+}
+
+/**
+ * rnpgbe_set_interrupt_capability - set MSI-X or MSI if supported
+ * @mucse: pointer to private structure
+ *
+ * Attempt to configure the interrupts using the best available
+ * capabilities of the hardware.
+ **/
+static int rnpgbe_set_interrupt_capability(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	int vector, v_budget, err = 0;
+	int irq_mode_back = mucse->irq_mode;
+
+	v_budget = min_t(int, mucse->num_tx_queues, mucse->num_rx_queues);
+	v_budget = min_t(int, v_budget, num_online_cpus());
+	v_budget += mucse->num_other_vectors;
+	v_budget = min_t(int, v_budget, hw->max_msix_vectors);
+
+	if (mucse->irq_mode == irq_mode_msix) {
+		mucse->msix_entries = kcalloc(v_budget,
+					      sizeof(struct msix_entry),
+					      GFP_KERNEL);
+
+		if (!mucse->msix_entries)
+			return -ENOMEM;
+
+		for (vector = 0; vector < v_budget; vector++)
+			mucse->msix_entries[vector].entry = vector;
+
+		err = rnpgbe_acquire_msix_vectors(mucse, v_budget);
+		if (!err) {
+			if (mucse->num_other_vectors)
+				mucse->q_vector_off = 1;
+			mucse->flags |= M_FLAG_MSIX_ENABLED;
+			goto out;
+		}
+		kfree(mucse->msix_entries);
+		/* fall back to MSI if the hardware supports it */
+		if (hw->flags & M_FLAG_MSI_CAPABLE)
+			mucse->irq_mode = irq_mode_msi;
+	}
+	/* MSI mode: either requested directly or as an MSI-X fallback */
+	if (mucse->irq_mode == irq_mode_msi) {
+		err = pci_enable_msi(mucse->pdev);
+		/* MSI mode uses only one irq */
+		if (!err)
+			mucse->flags |= M_FLAG_MSI_ENABLED;
+	}
+	/* restore the original irq_mode for next time */
+	mucse->irq_mode = irq_mode_back;
+	/* legacy and MSI use only one vector */
+	mucse->num_q_vectors = 1;
+	err = 0;
+out:
+	return err;
+}
+
+static void update_ring_count(struct mucse *mucse)
+{
+	if (mucse->flags2 & M_FLAG2_INSMOD)
+		return;
+
+	mucse->flags2 |= M_FLAG2_INSMOD;
+	/* limit ring count if in msi or legacy mode */
+	if (!(mucse->flags & M_FLAG_MSIX_ENABLED)) {
+		mucse->num_tx_queues = 1;
+		mucse->num_rx_queues = 1;
+	}
+}
+
+static void mucse_add_ring(struct mucse_ring *ring,
+			   struct mucse_ring_container *head)
+{
+	ring->next = head->ring;
+	head->ring = ring;
+	head->count++;
+}
+
+/**
+ * rnpgbe_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @mucse: pointer to private structure
+ * @eth_queue_idx: first queue_index for this q_vector
+ * @v_idx: index of vector used for this q_vector
+ * @r_idx: hw index of the first ring to assign
+ * @r_count: number of tx (and rx) rings to assign
+ * @step: hw ring index step between consecutive rings
+ *
+ * We allocate one q_vector.  If allocation fails we return -ENOMEM.
+ **/
+static int rnpgbe_alloc_q_vector(struct mucse *mucse,
+				 int eth_queue_idx, int v_idx, int r_idx,
+				 int r_count, int step)
+{
+	struct mucse_q_vector *q_vector;
+	struct mucse_ring *ring;
+	struct mucse_hw *hw = &mucse->hw;
+	struct mucse_dma_info *dma = &hw->dma;
+	int node = NUMA_NO_NODE;
+	int cpu = -1;
+	int ring_count, size;
+	int txr_count, rxr_count, idx;
+	int rxr_idx = r_idx, txr_idx = r_idx;
+	int cpu_offset = 0;
+
+	txr_count = r_count;
+	rxr_count = r_count;
+	ring_count = txr_count + rxr_count;
+	size = sizeof(struct mucse_q_vector) +
+	       (sizeof(struct mucse_ring) * ring_count);
+
+	/* subtract mucse->q_vector_off to map the vector index to a CPU */
+	if (cpu_online(cpu_offset + v_idx - mucse->q_vector_off)) {
+		cpu = cpu_offset + v_idx - mucse->q_vector_off;
+		node = cpu_to_node(cpu);
+	}
+
+	/* allocate q_vector and rings */
+	q_vector = kzalloc_node(size, GFP_KERNEL, node);
+	if (!q_vector)
+		q_vector = kzalloc(size, GFP_KERNEL);
+	if (!q_vector)
+		return -ENOMEM;
+
+	/* setup affinity mask and node */
+	if (cpu != -1)
+		cpumask_set_cpu(cpu, &q_vector->affinity_mask);
+	q_vector->numa_node = node;
+
+	netif_napi_add_weight(mucse->netdev, &q_vector->napi, rnpgbe_poll,
+			      mucse->napi_budge);
+	/* tie q_vector and mucse together */
+	mucse->q_vector[v_idx - mucse->q_vector_off] = q_vector;
+	q_vector->mucse = mucse;
+	q_vector->v_idx = v_idx;
+	/* initialize pointer to rings */
+	ring = q_vector->ring;
+
+	for (idx = 0; idx < txr_count; idx++) {
+		/* assign generic ring traits */
+		ring->dev = &mucse->pdev->dev;
+		ring->netdev = mucse->netdev;
+		/* configure backlink on ring */
+		ring->q_vector = q_vector;
+		/* update q_vector Tx values */
+		mucse_add_ring(ring, &q_vector->tx);
+
+		/* apply Tx specific ring traits */
+		ring->count = mucse->tx_ring_item_count;
+		ring->queue_index = eth_queue_idx + idx;
+		/* used to locate the hw register block */
+		ring->rnpgbe_queue_idx = txr_idx;
+		ring->ring_addr = dma->dma_ring_addr + RING_OFFSET(txr_idx);
+		ring->dma_int_stat = ring->ring_addr + DMA_INT_STAT;
+		ring->dma_int_mask = ring->ring_addr + DMA_INT_MASK;
+		ring->dma_int_clr = ring->ring_addr + DMA_INT_CLR;
+		ring->device_id = mucse->pdev->device;
+		ring->pfvfnum = hw->pfvfnum;
+		/* not support tunnel */
+		ring->ring_flags |= M_RING_NO_TUNNEL_SUPPORT;
+		/* assign ring to mucse */
+		mucse->tx_ring[ring->queue_index] = ring;
+		/* update count and index */
+		txr_idx += step;
+		/* push pointer to next ring */
+		ring++;
+	}
+
+	for (idx = 0; idx < rxr_count; idx++) {
+		/* assign generic ring traits */
+		ring->dev = &mucse->pdev->dev;
+		ring->netdev = mucse->netdev;
+		/* configure backlink on ring */
+		ring->q_vector = q_vector;
+		/* update q_vector Rx values */
+		mucse_add_ring(ring, &q_vector->rx);
+		/* apply Rx specific ring traits */
+		ring->count = mucse->rx_ring_item_count;
+		/* rnpgbe_queue_idx may be remapped later */
+		ring->queue_index = eth_queue_idx + idx;
+		ring->rnpgbe_queue_idx = rxr_idx;
+		ring->ring_addr = dma->dma_ring_addr + RING_OFFSET(rxr_idx);
+		ring->dma_int_stat = ring->ring_addr + DMA_INT_STAT;
+		ring->dma_int_mask = ring->ring_addr + DMA_INT_MASK;
+		ring->dma_int_clr = ring->ring_addr + DMA_INT_CLR;
+		ring->device_id = mucse->pdev->device;
+		ring->pfvfnum = hw->pfvfnum;
+
+		ring->ring_flags |= M_RING_NO_TUNNEL_SUPPORT;
+		ring->ring_flags |= M_RING_STAGS_SUPPORT;
+		/* assign ring to mucse */
+		mucse->rx_ring[ring->queue_index] = ring;
+		/* update count and index */
+		rxr_idx += step;
+		/* push pointer to next ring */
+		ring++;
+	}
+
+	q_vector->vector_flags |= M_QVECTOR_FLAG_ITR_FEATURE;
+	q_vector->itr_rx = mucse->rx_usecs;
+
+	return 0;
+}
+
+/**
+ * rnpgbe_free_q_vector - Free memory allocated for specific interrupt vector
+ * @mucse: pointer to private structure
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector.  In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void rnpgbe_free_q_vector(struct mucse *mucse, int v_idx)
+{
+	struct mucse_q_vector *q_vector = mucse->q_vector[v_idx];
+	struct mucse_ring *ring;
+
+	mucse_for_each_ring(ring, q_vector->tx)
+		mucse->tx_ring[ring->queue_index] = NULL;
+
+	mucse_for_each_ring(ring, q_vector->rx)
+		mucse->rx_ring[ring->queue_index] = NULL;
+
+	mucse->q_vector[v_idx] = NULL;
+	netif_napi_del(&q_vector->napi);
+	kfree_rcu(q_vector, rcu);
+}
+
+/**
+ * rnpgbe_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @mucse: pointer to private structure
+ *
+ * We allocate one q_vector per tx/rx queue pair.  If allocation fails we
+ * return -ENOMEM.
+ **/
+static int rnpgbe_alloc_q_vectors(struct mucse *mucse)
+{
+	int v_idx = mucse->q_vector_off;
+	int ring_idx = 0;
+	int r_remaining = min_t(int, mucse->num_tx_queues,
+				mucse->num_rx_queues);
+	int ring_step = 1;
+	int err, ring_cnt, v_remaining = mucse->num_q_vectors;
+	int q_vector_nums = 0;
+	int eth_queue_idx = 0;
+
+	/* multiple rings may share one q_vector */
+	for (; r_remaining > 0 && v_remaining > 0; v_remaining--) {
+		ring_cnt = DIV_ROUND_UP(r_remaining, v_remaining);
+		err = rnpgbe_alloc_q_vector(mucse, eth_queue_idx,
+					    v_idx, ring_idx, ring_cnt,
+					    ring_step);
+		if (err)
+			goto err_out;
+		ring_idx += ring_step * ring_cnt;
+		r_remaining -= ring_cnt;
+		v_idx++;
+		q_vector_nums++;
+		eth_queue_idx += ring_cnt;
+	}
+	/* record the number of q_vectors actually used */
+	mucse->num_q_vectors = q_vector_nums;
+
+	return 0;
+
+err_out:
+	mucse->num_tx_queues = 0;
+	mucse->num_rx_queues = 0;
+	mucse->num_q_vectors = 0;
+
+	while (v_idx--)
+		rnpgbe_free_q_vector(mucse, v_idx);
+
+	return -ENOMEM;
+}
+
+/**
+ * rnpgbe_cache_ring_rss - Descriptor ring to register mapping for RSS
+ * @mucse: pointer to private structure
+ *
+ * Cache the descriptor ring offsets for RSS to the assigned rings.
+ *
+ **/
+static void rnpgbe_cache_ring_rss(struct mucse *mucse)
+{
+	int i;
+	/* setup here */
+	int ring_step = 1;
+	struct mucse_ring *ring;
+	struct mucse_hw *hw = &mucse->hw;
+	struct mucse_dma_info *dma = &hw->dma;
+
+	/* some ring alloc rules can be added here */
+	for (i = 0; i < mucse->num_tx_queues; i++) {
+		ring = mucse->tx_ring[i];
+		ring->rnpgbe_queue_idx = i * ring_step;
+		ring->ring_addr = dma->dma_ring_addr +
+				  RING_OFFSET(ring->rnpgbe_queue_idx);
+
+		ring->dma_int_stat = ring->ring_addr + DMA_INT_STAT;
+		ring->dma_int_mask = ring->ring_addr + DMA_INT_MASK;
+		ring->dma_int_clr = ring->ring_addr + DMA_INT_CLR;
+	}
+
+	for (i = 0; i < mucse->num_rx_queues; i++) {
+		ring = mucse->rx_ring[i];
+		ring->rnpgbe_queue_idx = i * ring_step;
+		ring->ring_addr = dma->dma_ring_addr +
+				  RING_OFFSET(ring->rnpgbe_queue_idx);
+		ring->dma_int_stat = ring->ring_addr + DMA_INT_STAT;
+		ring->dma_int_mask = ring->ring_addr + DMA_INT_MASK;
+		ring->dma_int_clr = ring->ring_addr + DMA_INT_CLR;
+	}
+}
+
+/**
+ * rnpgbe_cache_ring_register - Descriptor ring to register mapping
+ * @mucse: pointer to private structure
+ *
+ * Re-map ring registers here to satisfy feature requirements.
+ **/
+static void rnpgbe_cache_ring_register(struct mucse *mucse)
+{
+	rnpgbe_cache_ring_rss(mucse);
+}
+
+/**
+ * rnpgbe_free_q_vectors - Free memory allocated for interrupt vectors
+ * @mucse: pointer to private structure
+ *
+ * This function frees the memory allocated to the q_vectors.  In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void rnpgbe_free_q_vectors(struct mucse *mucse)
+{
+	int v_idx = mucse->num_q_vectors;
+
+	mucse->num_rx_queues = 0;
+	mucse->num_tx_queues = 0;
+	mucse->num_q_vectors = 0;
+
+	while (v_idx--)
+		rnpgbe_free_q_vector(mucse, v_idx);
+}
+
+static void rnpgbe_reset_interrupt_capability(struct mucse *mucse)
+{
+	if (mucse->flags & M_FLAG_MSIX_ENABLED)
+		pci_disable_msix(mucse->pdev);
+	else if (mucse->flags & M_FLAG_MSI_ENABLED)
+		pci_disable_msi(mucse->pdev);
+
+	kfree(mucse->msix_entries);
+	mucse->msix_entries = NULL;
+	mucse->q_vector_off = 0;
+	/* clear msi/msix enabled flags */
+	mucse->flags &= (~M_FLAG_MSIX_ENABLED);
+	mucse->flags &= (~M_FLAG_MSI_ENABLED);
+}
+
+/**
+ * rnpgbe_init_interrupt_scheme - Determine proper interrupt scheme
+ * @mucse: pointer to private structure
+ *
+ * We determine which interrupt scheme to use based on:
+ * - hardware queue count
+ * - online CPU count
+ * - irq mode (MSI/legacy force a single vector)
+ **/
+int rnpgbe_init_interrupt_scheme(struct mucse *mucse)
+{
+	int err;
+
+	/* Number of supported queues */
+	rnpgbe_set_num_queues(mucse);
+	/* Set interrupt mode */
+	err = rnpgbe_set_interrupt_capability(mucse);
+	if (err)
+		goto err_set_interrupt;
+	/* update ring count only on first init */
+	update_ring_count(mucse);
+	err = rnpgbe_alloc_q_vectors(mucse);
+	if (err)
+		goto err_alloc_q_vectors;
+	rnpgbe_cache_ring_register(mucse);
+	set_bit(__MUCSE_DOWN, &mucse->state);
+
+	return 0;
+
+err_alloc_q_vectors:
+	rnpgbe_reset_interrupt_capability(mucse);
+err_set_interrupt:
+	return err;
+}
+
+/**
+ * rnpgbe_clear_interrupt_scheme - Clear the current interrupt scheme settings
+ * @mucse: pointer to private structure
+ *
+ * Clear interrupt specific resources and reset the structure
+ **/
+void rnpgbe_clear_interrupt_scheme(struct mucse *mucse)
+{
+	mucse->num_tx_queues = 0;
+	mucse->num_rx_queues = 0;
+	rnpgbe_free_q_vectors(mucse);
+	rnpgbe_reset_interrupt_capability(mucse);
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
new file mode 100644
index 000000000000..ab55c5ae1482
--- /dev/null
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2020 - 2025 Mucse Corporation. */
+
+#ifndef _RNPGBE_LIB_H
+#define _RNPGBE_LIB_H
+
+#include "rnpgbe.h"
+
+#define RING_OFFSET(n) (0x100 * (n))
+#define DMA_RX_START (0x10)
+#define DMA_RX_READY (0x14)
+#define DMA_TX_START (0x18)
+#define DMA_TX_READY (0x1c)
+#define DMA_INT_MASK (0x24)
+#define TX_INT_MASK (0x02)
+#define RX_INT_MASK (0x01)
+#define DMA_INT_CLR (0x28)
+#define DMA_INT_STAT (0x20)
+
+#define mucse_for_each_ring(pos, head)  \
+	for (pos = (head).ring; pos; pos = pos->next)
+
+int rnpgbe_init_interrupt_scheme(struct mucse *mucse);
+void rnpgbe_clear_interrupt_scheme(struct mucse *mucse);
+
+#endif /* _RNPGBE_LIB_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index d99da9838e27..bfe7b34be78e 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -12,6 +12,7 @@
 #include "rnpgbe.h"
 #include "rnpgbe_mbx_fw.h"
 #include "rnpgbe_sfc.h"
+#include "rnpgbe_lib.h"
 
 char rnpgbe_driver_name[] = "rnpgbe";
 static const char rnpgbe_driver_string[] =
@@ -45,6 +46,51 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
 	{0, },
 };
 
+static struct workqueue_struct *rnpgbe_wq;
+
+void rnpgbe_service_event_schedule(struct mucse *mucse)
+{
+	if (!test_bit(__MUCSE_DOWN, &mucse->state) &&
+	    !test_and_set_bit(__MUCSE_SERVICE_SCHED, &mucse->state))
+		queue_work(rnpgbe_wq, &mucse->service_task);
+}
+
+/**
+ * rnpgbe_service_timer - Timer Call-back
+ * @t: pointer to timer_list
+ **/
+static void rnpgbe_service_timer(struct timer_list *t)
+{
+	struct mucse *mucse = timer_container_of(mucse, t, service_timer);
+	unsigned long next_event_offset;
+	bool ready = true;
+
+	/* poll faster when waiting for link */
+	if (mucse->flags & M_FLAG_NEED_LINK_UPDATE)
+		next_event_offset = HZ / 10;
+	else
+		next_event_offset = HZ;
+	/* Reset the timer */
+	if (!test_bit(__MUCSE_REMOVE, &mucse->state))
+		mod_timer(&mucse->service_timer, next_event_offset + jiffies);
+
+	if (ready)
+		rnpgbe_service_event_schedule(mucse);
+}
+
+/**
+ * rnpgbe_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void rnpgbe_service_task(struct work_struct *work)
+{
+}
+
+int rnpgbe_poll(struct napi_struct *napi, int budget)
+{
+	return 0;
+}
+
 /**
  * rnpgbe_check_fw_from_flash - Check chip-id and bin-id
  * @hw: hardware structure
@@ -169,11 +215,67 @@ static int rnpgbe_sw_init(struct mucse *mucse)
 	/* set default ring sizes */
 	mucse->tx_ring_item_count = M_DEFAULT_TXD;
 	mucse->rx_ring_item_count = M_DEFAULT_RXD;
+	mucse->irq_mode = irq_mode_msix;
+	mucse->max_q_vectors = hw->max_msix_vectors;
+	mucse->num_other_vectors = 1;
 	set_bit(__MUCSE_DOWN, &mucse->state);
 
 	return 0;
 }
 
+static void remove_mbx_irq(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+
+	if (mucse->num_other_vectors == 0)
+		return;
+	/* only msix uses an independent interrupt */
+	if (mucse->flags & M_FLAG_MSIX_ENABLED) {
+		hw->mbx.ops.configure(hw,
+				      mucse->msix_entries[0].entry,
+				      false);
+		if (hw->mbx.irq_enabled) {
+			free_irq(mucse->msix_entries[0].vector, mucse);
+			hw->mbx.irq_enabled = false;
+		}
+	}
+}
+
+static irqreturn_t rnpgbe_msix_other(int irq, void *data)
+{
+	struct mucse *mucse = (struct mucse *)data;
+
+	set_bit(__MUCSE_IN_IRQ, &mucse->state);
+	clear_bit(__MUCSE_IN_IRQ, &mucse->state);
+
+	return IRQ_HANDLED;
+}
+
+static int register_mbx_irq(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	struct net_device *netdev = mucse->netdev;
+	int err = 0;
+
+	/* for mbx:vector0 */
+	if (mucse->num_other_vectors == 0)
+		return err;
+	/* only do this in msix mode */
+	if (mucse->flags & M_FLAG_MSIX_ENABLED) {
+		err = request_irq(mucse->msix_entries[0].vector,
+				  rnpgbe_msix_other, 0, netdev->name,
+				  mucse);
+		if (err)
+			goto err_mbx;
+		hw->mbx.ops.configure(hw,
+				      mucse->msix_entries[0].entry,
+				      true);
+		hw->mbx.irq_enabled = true;
+	}
+err_mbx:
+	return err;
+}
+
 /**
  * rnpgbe_add_adpater - add netdev for this pci_dev
  * @pdev: PCI device information structure
@@ -260,7 +362,6 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	hw->mbx.ops.init_params(hw);
 	/* echo fw driver insmod */
 	hw->ops.driver_status(hw, true, mucse_driver_insmod);
-
 	if (mucse_mbx_get_capability(hw)) {
 		dev_err(&pdev->dev,
 			"mucse_mbx_get_capability failed!\n");
@@ -286,9 +387,20 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	eth_hw_addr_set(netdev, hw->perm_addr);
 	strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
 	memcpy(netdev->perm_addr, hw->perm_addr, netdev->addr_len);
+	ether_addr_copy(hw->addr, hw->perm_addr);
+	timer_setup(&mucse->service_timer, rnpgbe_service_timer, 0);
+	INIT_WORK(&mucse->service_task, rnpgbe_service_task);
+	clear_bit(__MUCSE_SERVICE_SCHED, &mucse->state);
+	err = rnpgbe_init_interrupt_scheme(mucse);
+	if (err)
+		goto err_free_net;
+	err = register_mbx_irq(mucse);
+	if (err)
+		goto err_free_irq;
 
 	return 0;
-
+err_free_irq:
+	rnpgbe_clear_interrupt_scheme(mucse);
 err_free_net:
 	free_netdev(netdev);
 	return err;
@@ -357,7 +469,11 @@ static void rnpgbe_rm_adpater(struct mucse *mucse)
 
 	netdev = mucse->netdev;
 	pr_info("= remove rnpgbe:%s =\n", netdev->name);
+	cancel_work_sync(&mucse->service_task);
+	timer_delete_sync(&mucse->service_timer);
 	hw->ops.driver_status(hw, false, mucse_driver_insmod);
+	remove_mbx_irq(mucse);
+	rnpgbe_clear_interrupt_scheme(mucse);
 	free_netdev(netdev);
 	mucse->netdev = NULL;
 	pr_info("remove complete\n");
@@ -391,6 +507,8 @@ static void __rnpgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 	*enable_wake = false;
 	netif_device_detach(netdev);
+	remove_mbx_irq(mucse);
+	rnpgbe_clear_interrupt_scheme(mucse);
 	pci_disable_device(pdev);
 }
 
@@ -428,6 +546,12 @@ static int __init rnpgbe_init_module(void)
 	pr_info("%s - version %s\n", rnpgbe_driver_string,
 		rnpgbe_driver_version);
 	pr_info("%s\n", rnpgbe_copyright);
+	rnpgbe_wq = create_singlethread_workqueue(rnpgbe_driver_name);
+	if (!rnpgbe_wq) {
+		pr_err("%s: Failed to create workqueue\n", rnpgbe_driver_name);
+		return -ENOMEM;
+	}
+
 	ret = pci_register_driver(&rnpgbe_driver);
 	if (ret)
 		return ret;
@@ -440,6 +564,7 @@ module_init(rnpgbe_init_module);
 static void __exit rnpgbe_exit_module(void)
 {
 	pci_unregister_driver(&rnpgbe_driver);
+	destroy_workqueue(rnpgbe_wq);
 }
 
 module_exit(rnpgbe_exit_module);
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
index a9c5caa764a0..f86fb81f4db4 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
@@ -231,7 +231,7 @@ int rnpgbe_mbx_lldp_get(struct mucse_hw *hw)
 	memset(&req, 0, sizeof(req));
 	memset(&reply, 0, sizeof(reply));
 	build_get_lldp_req(&req, cookie, hw->nr_lane);
-	if (hw->mbx.other_irq_enabled) {
+	if (hw->mbx.irq_enabled) {
 		err = mucse_mbx_fw_post_req(hw, &req, cookie);
 	} else {
 		err = mucse_fw_send_cmd_wait(hw, &req, &reply);
@@ -332,7 +332,7 @@ int mucse_mbx_fw_reset_phy(struct mucse_hw *hw)
 
 	memset(&req, 0, sizeof(req));
 	memset(&reply, 0, sizeof(reply));
-	if (hw->mbx.other_irq_enabled) {
+	if (hw->mbx.irq_enabled) {
 		struct mbx_req_cookie *cookie = mbx_cookie_zalloc(0);
 
 		if (!cookie)
@@ -376,7 +376,7 @@ int mucse_fw_get_macaddr(struct mucse_hw *hw, int pfvfnum,
 	if (!mac_addr)
 		return -EINVAL;
 
-	if (hw->mbx.other_irq_enabled) {
+	if (hw->mbx.irq_enabled) {
 		struct mbx_req_cookie *cookie =
 			mbx_cookie_zalloc(sizeof(reply.mac_addr));
 		struct mac_addr *mac = (struct mac_addr *)cookie->priv;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 09/15] net: rnpgbe: Add netdev register and init tx/rx memory
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (7 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 08/15] net: rnpgbe: Add irq support Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 10/15] net: rnpgbe: Add netdev irq in open Dong Yibo
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Initialize tx/rx memory for the tx/rx descriptor rings.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    | 145 +++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 358 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |   2 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  84 +++-
 4 files changed, 586 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 82df7f133f10..feb74048b9e0 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -247,6 +247,7 @@ struct mucse_hw {
 };
 
 enum mucse_state_t {
+	__MMUCSE_TESTING,
 	__MUCSE_DOWN,
 	__MUCSE_SERVICE_SCHED,
 	__MUCSE_PTP_TX_IN_PROGRESS,
@@ -306,6 +307,134 @@ struct mucse_rx_queue_stats {
 	u64 rx_clean_count;
 };
 
+union rnpgbe_rx_desc {
+	struct {
+		union {
+			__le64 pkt_addr;
+			struct {
+				__le32 addr_lo;
+				__le32 addr_hi;
+			};
+		};
+		__le64 resv_cmd;
+#define M_RXD_FLAG_RS (0)
+	};
+	struct {
+		__le32 rss_hash;
+		__le16 mark;
+		__le16 rev1;
+#define M_RX_L3_TYPE_MASK BIT(15)
+#define VEB_VF_PKG BIT(1)
+#define VEB_VF_IGNORE_VLAN BIT(0)
+#define REV_OUTER_VLAN BIT(5)
+		__le16 len;
+		__le16 padding_len;
+		__le16 vlan;
+		__le16 cmd;
+#define M_RXD_STAT_VLAN_VALID BIT(15)
+#define M_RXD_STAT_STAG BIT(14)
+#define M_RXD_STAT_TUNNEL_NVGRE (0x02 << 13)
+#define M_RXD_STAT_TUNNEL_VXLAN (0x01 << 13)
+#define M_RXD_STAT_TUNNEL_MASK (0x03 << 13)
+#define M_RXD_STAT_ERR_MASK (0x1f << 8)
+#define M_RXD_STAT_SCTP_MASK (0x04 << 8)
+#define M_RXD_STAT_L4_MASK (0x02 << 8)
+#define M_RXD_STAT_L4_SCTP (0x02 << 6)
+#define M_RXD_STAT_L4_TCP (0x01 << 6)
+#define M_RXD_STAT_L4_UDP (0x03 << 6)
+#define M_RXD_STAT_IPV6 BIT(5)
+#define M_RXD_STAT_IPV4 (0 << 5)
+#define M_RXD_STAT_PTP BIT(4)
+#define M_RXD_STAT_DD BIT(1)
+#define M_RXD_STAT_EOP BIT(0)
+	} wb;
+} __packed;
+
+struct rnpgbe_tx_desc {
+	union {
+		__le64 pkt_addr;
+		struct {
+			__le32 adr_lo;
+			__le32 adr_hi;
+		};
+	};
+	union {
+		__le64 vlan_cmd_bsz;
+		struct {
+			__le32 blen_mac_ip_len;
+			__le32 vlan_cmd;
+		};
+	};
+#define M_TXD_FLAGS_VLAN_PRIO_MASK 0xe000
+#define M_TX_FLAGS_VLAN_PRIO_SHIFT 13
+#define M_TX_FLAGS_VLAN_CFI_SHIFT 12
+#define M_TXD_VLAN_VALID (0x80000000)
+#define M_TXD_SVLAN_TYPE (0x02000000)
+#define M_TXD_VLAN_CTRL_NOP (0x00 << 13)
+#define M_TXD_VLAN_CTRL_RM_VLAN (0x20000000)
+#define M_TXD_VLAN_CTRL_INSERT_VLAN (0x40000000)
+#define M_TXD_L4_CSUM (0x10000000)
+#define M_TXD_IP_CSUM (0x8000000)
+#define M_TXD_TUNNEL_MASK (0x3000000)
+#define M_TXD_TUNNEL_VXLAN (0x1000000)
+#define M_TXD_TUNNEL_NVGRE (0x2000000)
+#define M_TXD_L4_TYPE_UDP (0xc00000)
+#define M_TXD_L4_TYPE_TCP (0x400000)
+#define M_TXD_L4_TYPE_SCTP (0x800000)
+#define M_TXD_FLAG_IPv4 (0)
+#define M_TXD_FLAG_IPv6 (0x200000)
+#define M_TXD_FLAG_TSO (0x100000)
+#define M_TXD_FLAG_PTP (0x4000000)
+#define M_TXD_CMD_RS (0x040000)
+#define M_TXD_CMD_INNER_VLAN (0x08000000)
+#define M_TXD_STAT_DD (0x020000)
+#define M_TXD_CMD_EOP (0x010000)
+#define M_TXD_PAD_CTRL (0x01000000)
+};
+
+struct mucse_tx_buffer {
+	struct rnpgbe_tx_desc *next_to_watch;
+	unsigned long time_stamp;
+	struct sk_buff *skb;
+	unsigned int bytecount;
+	unsigned short gso_segs;
+	bool gso_need_padding;
+	__be16 protocol;
+	__be16 priv_tags;
+	DEFINE_DMA_UNMAP_ADDR(dma);
+	DEFINE_DMA_UNMAP_LEN(len);
+	union {
+		u32 mss_len_vf_num;
+		struct {
+			__le16 mss_len;
+			u8 vf_num;
+			u8 l4_hdr_len;
+		};
+	};
+	union {
+		u32 inner_vlan_tunnel_len;
+		struct {
+			u8 tunnel_hdr_len;
+			u8 inner_vlan_l;
+			u8 inner_vlan_h;
+			u8 resv;
+		};
+	};
+	bool ctx_flag;
+};
+
+struct mucse_rx_buffer {
+	struct sk_buff *skb;
+	dma_addr_t dma;
+	struct page *page;
+#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
+	__u32 page_offset;
+#else /* (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536) */
+	__u16 page_offset;
+#endif /* (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536) */
+	__u16 pagecnt_bias;
+};
+
 struct mucse_ring {
 	struct mucse_ring *next;
 	struct mucse_q_vector *q_vector;
@@ -349,6 +478,7 @@ struct mucse_ring {
 	u16 next_to_use;
 	u16 next_to_clean;
 	u16 device_id;
+	u16 next_to_alloc;
 	struct mucse_queue_stats stats;
 	struct u64_stats_sync syncp;
 	union {
@@ -434,6 +564,21 @@ struct rnpgbe_info {
 	void (*get_invariants)(struct mucse_hw *hw);
 };
 
+static inline struct netdev_queue *txring_txq(const struct mucse_ring *ring)
+{
+	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
+}
+
+#define M_RXBUFFER_1536 (1536)
+static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
+{
+	return (M_RXBUFFER_1536 - NET_IP_ALIGN);
+}
+
+#define M_TX_DESC(R, i) (&(((struct rnpgbe_tx_desc *)((R)->desc))[i]))
+#define M_RX_DESC(R, i) (&(((union rnpgbe_rx_desc *)((R)->desc))[i]))
+
+#define M_RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
 /* Device IDs */
 #ifndef PCI_VENDOR_ID_MUCSE
 #define PCI_VENDOR_ID_MUCSE 0x8848
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index 95c913212182..0dbb942eb4c7 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright(c) 2020 - 2025 Mucse Corporation. */
 
+#include <linux/vmalloc.h>
+
 #include "rnpgbe.h"
 #include "rnpgbe_lib.h"
 
@@ -460,3 +462,359 @@ void rnpgbe_clear_interrupt_scheme(struct mucse *mucse)
 	rnpgbe_free_q_vectors(mucse);
 	rnpgbe_reset_interrupt_capability(mucse);
 }
+
+/**
+ * rnpgbe_clean_tx_ring - Free Tx Buffers
+ * @tx_ring: ring to be cleaned
+ **/
+static void rnpgbe_clean_tx_ring(struct mucse_ring *tx_ring)
+{
+	unsigned long size;
+	u16 i = tx_ring->next_to_clean;
+	struct mucse_tx_buffer *tx_buffer;
+
+	/* ring already cleared, nothing to do */
+	if (!tx_ring->tx_buffer_info)
+		return;
+
+	tx_buffer = &tx_ring->tx_buffer_info[i];
+
+	while (i != tx_ring->next_to_use) {
+		struct rnpgbe_tx_desc *eop_desc, *tx_desc;
+
+		dev_kfree_skb_any(tx_buffer->skb);
+		/* unmap skb header data */
+		dma_unmap_single(tx_ring->dev, dma_unmap_addr(tx_buffer, dma),
+				 dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+		eop_desc = tx_buffer->next_to_watch;
+		tx_desc = M_TX_DESC(tx_ring, i);
+		/* unmap remaining buffers */
+		while (tx_desc != eop_desc) {
+			tx_buffer++;
+			tx_desc++;
+			i++;
+			if (unlikely(i == tx_ring->count)) {
+				i = 0;
+				tx_buffer = tx_ring->tx_buffer_info;
+				tx_desc = M_TX_DESC(tx_ring, 0);
+			}
+
+			/* unmap any remaining paged data */
+			if (dma_unmap_len(tx_buffer, len))
+				dma_unmap_page(tx_ring->dev,
+					       dma_unmap_addr(tx_buffer, dma),
+					       dma_unmap_len(tx_buffer, len),
+					       DMA_TO_DEVICE);
+		}
+		/* move us one more past the eop_desc for start of next pkt */
+		tx_buffer++;
+		i++;
+		if (unlikely(i == tx_ring->count)) {
+			i = 0;
+			tx_buffer = tx_ring->tx_buffer_info;
+		}
+	}
+
+	netdev_tx_reset_queue(txring_txq(tx_ring));
+	size = sizeof(struct mucse_tx_buffer) * tx_ring->count;
+	memset(tx_ring->tx_buffer_info, 0, size);
+	/* Zero out the descriptor ring */
+	memset(tx_ring->desc, 0, tx_ring->size);
+	tx_ring->next_to_use = 0;
+	tx_ring->next_to_clean = 0;
+}
+
+/**
+ * rnpgbe_free_tx_resources - Free Tx Resources per Queue
+ * @tx_ring: tx descriptor ring for a specific queue
+ *
+ * Free all transmit software resources
+ **/
+static void rnpgbe_free_tx_resources(struct mucse_ring *tx_ring)
+{
+	rnpgbe_clean_tx_ring(tx_ring);
+	vfree(tx_ring->tx_buffer_info);
+	tx_ring->tx_buffer_info = NULL;
+	/* if not set, then don't free */
+	if (!tx_ring->desc)
+		return;
+
+	dma_free_coherent(tx_ring->dev, tx_ring->size, tx_ring->desc,
+			  tx_ring->dma);
+	tx_ring->desc = NULL;
+}
+
+/**
+ * rnpgbe_setup_tx_resources - allocate Tx resources (Descriptors)
+ * @tx_ring: tx descriptor ring (for a specific queue) to setup
+ * @mucse: pointer to private structure
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int rnpgbe_setup_tx_resources(struct mucse_ring *tx_ring,
+				     struct mucse *mucse)
+{
+	struct device *dev = tx_ring->dev;
+	int orig_node = dev_to_node(dev);
+	int numa_node = NUMA_NO_NODE;
+	int size;
+
+	size = sizeof(struct mucse_tx_buffer) * tx_ring->count;
+
+	if (tx_ring->q_vector)
+		numa_node = tx_ring->q_vector->numa_node;
+	tx_ring->tx_buffer_info = vzalloc_node(size, numa_node);
+	if (!tx_ring->tx_buffer_info)
+		tx_ring->tx_buffer_info = vzalloc(size);
+	if (!tx_ring->tx_buffer_info)
+		goto err;
+	/* round up to nearest 4K */
+	tx_ring->size = tx_ring->count * sizeof(struct rnpgbe_tx_desc);
+	tx_ring->size = ALIGN(tx_ring->size, 4096);
+	set_dev_node(dev, numa_node);
+	tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size, &tx_ring->dma,
+					   GFP_KERNEL);
+	set_dev_node(dev, orig_node);
+	if (!tx_ring->desc)
+		tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+						   &tx_ring->dma,
+						   GFP_KERNEL);
+	if (!tx_ring->desc)
+		goto err;
+
+	memset(tx_ring->desc, 0, tx_ring->size);
+	tx_ring->next_to_use = 0;
+	tx_ring->next_to_clean = 0;
+	return 0;
+
+err:
+	vfree(tx_ring->tx_buffer_info);
+	tx_ring->tx_buffer_info = NULL;
+	return -ENOMEM;
+}
+
+/**
+ * rnpgbe_setup_all_tx_resources - allocate all queues Tx resources
+ * @mucse: pointer to private structure
+ *
+ * Allocate memory for tx_ring.
+ * Return 0 on success, negative on failure
+ **/
+static int rnpgbe_setup_all_tx_resources(struct mucse *mucse)
+{
+	int i, err = 0;
+
+	for (i = 0; i < (mucse->num_tx_queues); i++) {
+		err = rnpgbe_setup_tx_resources(mucse->tx_ring[i], mucse);
+		if (!err)
+			continue;
+
+		goto err_setup_tx;
+	}
+
+	return 0;
+err_setup_tx:
+	while (i--)
+		rnpgbe_free_tx_resources(mucse->tx_ring[i]);
+	return err;
+}
+
+/**
+ * rnpgbe_free_all_tx_resources - Free Tx Resources for All Queues
+ * @mucse: pointer to private structure
+ *
+ * Free all transmit software resources
+ **/
+static void rnpgbe_free_all_tx_resources(struct mucse *mucse)
+{
+	int i;
+
+	for (i = 0; i < (mucse->num_tx_queues); i++)
+		rnpgbe_free_tx_resources(mucse->tx_ring[i]);
+}
+
+/**
+ * rnpgbe_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring:    rx descriptor ring (for a specific queue) to setup
+ * @mucse: pointer to private structure
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int rnpgbe_setup_rx_resources(struct mucse_ring *rx_ring,
+				     struct mucse *mucse)
+{
+	struct device *dev = rx_ring->dev;
+	int orig_node = dev_to_node(dev);
+	int numa_node = NUMA_NO_NODE;
+	int size;
+
+	size = sizeof(struct mucse_rx_buffer) * rx_ring->count;
+	if (rx_ring->q_vector)
+		numa_node = rx_ring->q_vector->numa_node;
+
+	rx_ring->rx_buffer_info = vzalloc_node(size, numa_node);
+	if (!rx_ring->rx_buffer_info)
+		rx_ring->rx_buffer_info = vzalloc(size);
+
+	if (!rx_ring->rx_buffer_info)
+		goto err;
+	/* Round up to nearest 4K */
+	rx_ring->size = rx_ring->count * sizeof(union rnpgbe_rx_desc);
+	rx_ring->size = ALIGN(rx_ring->size, 4096);
+	set_dev_node(dev, numa_node);
+	rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+					   &rx_ring->dma,
+					   GFP_KERNEL);
+	set_dev_node(dev, orig_node);
+	if (!rx_ring->desc)
+		rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+						   &rx_ring->dma,
+						   GFP_KERNEL);
+	if (!rx_ring->desc)
+		goto err;
+	memset(rx_ring->desc, 0, rx_ring->size);
+	rx_ring->next_to_clean = 0;
+	rx_ring->next_to_use = 0;
+
+	return 0;
+err:
+	vfree(rx_ring->rx_buffer_info);
+	rx_ring->rx_buffer_info = NULL;
+	return -ENOMEM;
+}
+
+/**
+ * rnpgbe_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+static void rnpgbe_clean_rx_ring(struct mucse_ring *rx_ring)
+{
+	u16 i = rx_ring->next_to_clean;
+	struct mucse_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i];
+
+	/* Free all the Rx ring sk_buffs */
+	while (i != rx_ring->next_to_alloc) {
+		if (rx_buffer->skb) {
+			struct sk_buff *skb = rx_buffer->skb;
+
+			dev_kfree_skb(skb);
+			rx_buffer->skb = NULL;
+		}
+		dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
+					      rx_buffer->page_offset,
+					      mucse_rx_bufsz(rx_ring),
+					      DMA_FROM_DEVICE);
+		/* free resources associated with mapping */
+		dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
+				     PAGE_SIZE,
+				     DMA_FROM_DEVICE,
+				     M_RX_DMA_ATTR);
+		__page_frag_cache_drain(rx_buffer->page,
+					rx_buffer->pagecnt_bias);
+		rx_buffer->page = NULL;
+		i++;
+		rx_buffer++;
+		if (i == rx_ring->count) {
+			i = 0;
+			rx_buffer = rx_ring->rx_buffer_info;
+		}
+	}
+
+	rx_ring->next_to_alloc = 0;
+	rx_ring->next_to_clean = 0;
+	rx_ring->next_to_use = 0;
+}
+
+/**
+ * rnpgbe_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+static void rnpgbe_free_rx_resources(struct mucse_ring *rx_ring)
+{
+	rnpgbe_clean_rx_ring(rx_ring);
+	vfree(rx_ring->rx_buffer_info);
+	rx_ring->rx_buffer_info = NULL;
+	/* if not set, then don't free */
+	if (!rx_ring->desc)
+		return;
+
+	dma_free_coherent(rx_ring->dev, rx_ring->size, rx_ring->desc,
+			  rx_ring->dma);
+	rx_ring->desc = NULL;
+}
+
+/**
+ * rnpgbe_setup_all_rx_resources - allocate all queues Rx resources
+ * @mucse: pointer to private structure
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int rnpgbe_setup_all_rx_resources(struct mucse *mucse)
+{
+	int i, err = 0;
+
+	for (i = 0; i < mucse->num_rx_queues; i++) {
+		err = rnpgbe_setup_rx_resources(mucse->rx_ring[i], mucse);
+		if (!err)
+			continue;
+
+		goto err_setup_rx;
+	}
+
+	return 0;
+err_setup_rx:
+	while (i--)
+		rnpgbe_free_rx_resources(mucse->rx_ring[i]);
+	return err;
+}
+
+/**
+ * rnpgbe_free_all_rx_resources - Free Rx Resources for All Queues
+ * @mucse: pointer to private structure
+ *
+ * Free all receive software resources
+ **/
+static void rnpgbe_free_all_rx_resources(struct mucse *mucse)
+{
+	int i;
+
+	for (i = 0; i < (mucse->num_rx_queues); i++) {
+		if (mucse->rx_ring[i]->desc)
+			rnpgbe_free_rx_resources(mucse->rx_ring[i]);
+	}
+}
+
+/**
+ * rnpgbe_setup_txrx - Allocate Tx/Rx Resources for All Queues
+ * @mucse: pointer to private structure
+ *
+ * Allocate all send/receive software resources
+ **/
+int rnpgbe_setup_txrx(struct mucse *mucse)
+{
+	int err;
+
+	err = rnpgbe_setup_all_tx_resources(mucse);
+	if (err)
+		return err;
+
+	err = rnpgbe_setup_all_rx_resources(mucse);
+	if (err)
+		goto err_setup_rx;
+	return 0;
+err_setup_rx:
+	rnpgbe_free_all_tx_resources(mucse);
+	return err;
+}
+
+/**
+ * rnpgbe_free_txrx - Clean Tx/Rx Resources for All Queues
+ * @mucse: pointer to private structure
+ *
+ * Free all send/receive software resources
+ **/
+void rnpgbe_free_txrx(struct mucse *mucse)
+{
+	rnpgbe_free_all_tx_resources(mucse);
+	rnpgbe_free_all_rx_resources(mucse);
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
index ab55c5ae1482..150d03f9ada9 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -22,5 +22,7 @@
 
 int rnpgbe_init_interrupt_scheme(struct mucse *mucse);
 void rnpgbe_clear_interrupt_scheme(struct mucse *mucse);
+int rnpgbe_setup_txrx(struct mucse *mucse);
+void rnpgbe_free_txrx(struct mucse *mucse);
 
 #endif /* _RNPGBE_LIB_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index bfe7b34be78e..95a68b6d08a5 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -8,6 +8,7 @@
 #include <linux/string.h>
 #include <linux/etherdevice.h>
 #include <linux/firmware.h>
+#include <linux/rtnetlink.h>
 
 #include "rnpgbe.h"
 #include "rnpgbe_mbx_fw.h"
@@ -194,6 +195,67 @@ static int init_firmware_for_n210(struct mucse_hw *hw)
 	return err;
 }
 
+/**
+ * rnpgbe_open - Called when a network interface is made active
+ * @netdev: network interface device structure
+ *
+ * Returns 0 on success, negative value on failure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP).
+ **/
+static int rnpgbe_open(struct net_device *netdev)
+{
+	struct mucse *mucse = netdev_priv(netdev);
+	int err;
+
+	/* disallow open during test */
+	if (test_bit(__MMUCSE_TESTING, &mucse->state))
+		return -EBUSY;
+
+	netif_carrier_off(netdev);
+	err = rnpgbe_setup_txrx(mucse);
+
+	return err;
+}
+
+/**
+ * rnpgbe_close - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * Returns 0, this is not allowed to fail
+ *
+ * The close entry point is called when an interface is de-activated
+ * by the OS.
+ **/
+static int rnpgbe_close(struct net_device *netdev)
+{
+	struct mucse *mucse = netdev_priv(netdev);
+
+	rnpgbe_free_txrx(mucse);
+
+	return 0;
+}
+
+static netdev_tx_t rnpgbe_xmit_frame(struct sk_buff *skb,
+				     struct net_device *netdev)
+{
+	dev_kfree_skb_any(skb);
+	return NETDEV_TX_OK;
+}
+
+const struct net_device_ops rnpgbe_netdev_ops = {
+	.ndo_open = rnpgbe_open,
+	.ndo_stop = rnpgbe_close,
+	.ndo_start_xmit = rnpgbe_xmit_frame,
+};
+
+static void rnpgbe_assign_netdev_ops(struct net_device *dev)
+{
+	dev->netdev_ops = &rnpgbe_netdev_ops;
+	dev->watchdog_timeo = 5 * HZ;
+}
+
 static int rnpgbe_sw_init(struct mucse *mucse)
 {
 	struct mucse_hw *hw = &mucse->hw;
@@ -368,7 +430,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 		err = -EIO;
 		goto err_free_net;
 	}
-
+	rnpgbe_assign_netdev_ops(netdev);
 	err = rnpgbe_sw_init(mucse);
 	if (err)
 		goto err_free_net;
@@ -384,8 +446,8 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	netdev->features |= NETIF_F_HIGHDMA;
 	netdev->priv_flags |= IFF_UNICAST_FLT;
 	netdev->priv_flags |= IFF_SUPP_NOFCS;
+	netdev->hw_features |= netdev->features;
 	eth_hw_addr_set(netdev, hw->perm_addr);
-	strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
 	memcpy(netdev->perm_addr, hw->perm_addr, netdev->addr_len);
 	ether_addr_copy(hw->addr, hw->perm_addr);
 	timer_setup(&mucse->service_timer, rnpgbe_service_timer, 0);
@@ -394,11 +456,17 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	err = rnpgbe_init_interrupt_scheme(mucse);
 	if (err)
 		goto err_free_net;
+
 	err = register_mbx_irq(mucse);
 	if (err)
 		goto err_free_irq;
-
+	strscpy(netdev->name, "eth%d", sizeof(netdev->name));
+	err = register_netdev(netdev);
+	if (err)
+		goto err_register;
 	return 0;
+err_register:
+	remove_mbx_irq(mucse);
 err_free_irq:
 	rnpgbe_clear_interrupt_scheme(mucse);
 err_free_net:
@@ -468,9 +536,15 @@ static void rnpgbe_rm_adpater(struct mucse *mucse)
 	struct mucse_hw *hw = &mucse->hw;
 
 	netdev = mucse->netdev;
+	if (mucse->flags2 & M_FLAG2_NO_NET_REG) {
+		free_netdev(netdev);
+		return;
+	}
 	pr_info("= remove rnpgbe:%s =\n", netdev->name);
 	cancel_work_sync(&mucse->service_task);
 	timer_delete_sync(&mucse->service_timer);
+	if (netdev->reg_state == NETREG_REGISTERED)
+		unregister_netdev(netdev);
 	hw->ops.driver_status(hw, false, mucse_driver_insmod);
 	remove_mbx_irq(mucse);
 	rnpgbe_clear_interrupt_scheme(mucse);
@@ -507,6 +581,10 @@ static void __rnpgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 	*enable_wake = false;
 	netif_device_detach(netdev);
+	rtnl_lock();
+	if (netif_running(netdev))
+		rnpgbe_free_txrx(mucse);
+	rtnl_unlock();
 	remove_mbx_irq(mucse);
 	rnpgbe_clear_interrupt_scheme(mucse);
 	pci_disable_device(pdev);
-- 
2.25.1



* [PATCH 10/15] net: rnpgbe: Add netdev irq in open
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (8 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 09/15] net: rnpgbe: Add netdev register and init tx/rx memory Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 11/15] net: rnpgbe: Add setup hw ring-vector, true up/down hw Dong Yibo
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Request tx/rx irqs in the open function.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  14 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  81 +++++
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |  11 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 307 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  30 ++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  30 +-
 6 files changed, 471 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index feb74048b9e0..d4e150c14582 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -66,7 +66,14 @@ struct mii_regs {
 	unsigned int clk_csr_mask;
 };
 
+struct mucse_mac_info;
+
+struct mucse_mac_operations {
+	void (*set_mac)(struct mucse_mac_info *mac, u8 *addr, int index);
+};
+
 struct mucse_mac_info {
+	struct mucse_mac_operations ops;
 	u8 __iomem *mac_addr;
 	void *back;
 	struct mii_regs mii;
@@ -173,6 +180,9 @@ struct mucse_hw_operations {
 	void (*init_rx_addrs)(struct mucse_hw *hw);
 	/* ops to fw */
 	void (*driver_status)(struct mucse_hw *hw, bool enable, int mode);
+	void (*update_hw_info)(struct mucse_hw *hw);
+	void (*set_mac)(struct mucse_hw *hw, u8 *mac);
+	void (*set_irq_mode)(struct mucse_hw *hw, bool legacy);
 };
 
 enum {
@@ -606,6 +616,10 @@ static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
 #define dma_rd32(dma, reg) m_rd_reg((dma)->dma_base_addr + (reg))
 #define eth_wr32(eth, reg, val) m_wr_reg((eth)->eth_base_addr + (reg), (val))
 #define eth_rd32(eth, reg) m_rd_reg((eth)->eth_base_addr + (reg))
+#define mac_wr32(mac, reg, val) m_wr_reg((mac)->mac_addr + (reg), (val))
+#define mac_rd32(mac, reg) m_rd_reg((mac)->mac_addr + (reg))
+#define ring_wr32(eth, reg, val) m_wr_reg((eth)->ring_addr + (reg), (val))
+#define ring_rd32(eth, reg) m_rd_reg((eth)->ring_addr + (reg))
 
 #define mucse_err(mucse, fmt, arg...) \
 	dev_err(&(mucse)->pdev->dev, fmt, ##arg)
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index e94a432dd7b6..5ad287e398a7 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -71,6 +71,12 @@ static s32 rnpgbe_eth_clear_rar_n500(struct mucse_eth_info *eth,
 	return 0;
 }
 
+/**
+ * rnpgbe_eth_clr_mc_addr_n500 - Clear multicast registers
+ * @eth: pointer to eth structure
+ *
+ * Clears all multicast address registers.
+ **/
 static void rnpgbe_eth_clr_mc_addr_n500(struct mucse_eth_info *eth)
 {
 	int i;
@@ -85,6 +91,30 @@ static struct mucse_eth_operations eth_ops_n500 = {
 	.clr_mc_addr = &rnpgbe_eth_clr_mc_addr_n500
 };
 
+/**
+ * rnpgbe_mac_set_mac_n500 - Set mac address in the hw mac module
+ * @mac: pointer to mac structure
+ * @addr: pointer to the mac address
+ * @index: Receive address register to write
+ *
+ * Write a mac address into the mac module registers.
+ **/
+static void rnpgbe_mac_set_mac_n500(struct mucse_mac_info *mac,
+				    u8 *addr, int index)
+{
+	u32 rar_low, rar_high = 0;
+
+	rar_low = ((u32)addr[0] | ((u32)addr[1] << 8) |
+		  ((u32)addr[2] << 16) | ((u32)addr[3] << 24));
+	rar_high = M_RAH_AV | ((u32)addr[4] | (u32)addr[5] << 8);
+	mac_wr32(mac, RNPGBE_MAC_UNICAST_HIGH(index), rar_high);
+	mac_wr32(mac, RNPGBE_MAC_UNICAST_LOW(index), rar_low);
+}
+
+static struct mucse_mac_operations mac_ops_n500 = {
+	.set_mac = &rnpgbe_mac_set_mac_n500,
+};
+
 static int rnpgbe_init_hw_ops_n500(struct mucse_hw *hw)
 {
 	int status = 0;
@@ -211,12 +241,62 @@ static void rnpgbe_init_rx_addrs_hw_ops_n500(struct mucse_hw *hw)
 	eth->ops.clr_mc_addr(eth);
 }
 
+/**
+ * rnpgbe_set_mac_hw_ops_n500 - Set mac address in hw
+ * @hw: pointer to hw structure
+ * @mac: pointer to the mac address
+ *
+ * Program a mac address into both the eth and mac modules.
+ **/
+static void rnpgbe_set_mac_hw_ops_n500(struct mucse_hw *hw, u8 *mac)
+{
+	struct mucse_eth_info *eth = &hw->eth;
+	struct mucse_mac_info *mac_info = &hw->mac;
+
+	/* use idx 0 */
+	eth->ops.set_rar(eth, 0, mac);
+	mac_info->ops.set_mac(mac_info, mac, 0);
+}
+
+static void rnpgbe_update_hw_info_hw_ops_n500(struct mucse_hw *hw)
+{
+	struct mucse_dma_info *dma = &hw->dma;
+	struct mucse_eth_info *eth = &hw->eth;
+
+	/* 1 enable eth filter */
+	eth_wr32(eth, RNPGBE_HOST_FILTER_EN, 1);
+	/* 2 open redir en */
+	eth_wr32(eth, RNPGBE_REDIR_EN, 1);
+	/* 3 setup tso fifo */
+	dma_wr32(dma, DMA_PKT_FIFO_DATA_PROG_FULL_THRESH, 36);
+}
+
+/**
+ * rnpgbe_set_irq_mode_n500 - Set hw irq mode
+ * @hw: pointer to hw structure
+ * @legacy: true for legacy (INTx) irq mode
+ *
+ * Configure the irq mode in hw.
+ **/
+static void rnpgbe_set_irq_mode_n500(struct mucse_hw *hw, bool legacy)
+{
+	if (legacy) {
+		hw_wr32(hw, RNPGBE_LEGANCY_ENABLE, 1);
+		hw_wr32(hw, RNPGBE_LEGANCY_TIME, 0x200);
+	} else {
+		hw_wr32(hw, RNPGBE_LEGANCY_ENABLE, 0);
+	}
+}
+
 static struct mucse_hw_operations hw_ops_n500 = {
 	.init_hw = &rnpgbe_init_hw_ops_n500,
 	.reset_hw = &rnpgbe_reset_hw_ops_n500,
 	.start_hw = &rnpgbe_start_hw_ops_n500,
 	.init_rx_addrs = &rnpgbe_init_rx_addrs_hw_ops_n500,
 	.driver_status = &rnpgbe_driver_status_hw_ops_n500,
+	.set_mac = &rnpgbe_set_mac_hw_ops_n500,
+	.update_hw_info = &rnpgbe_update_hw_info_hw_ops_n500,
+	.set_irq_mode = &rnpgbe_set_irq_mode_n500,
 };
 
 /**
@@ -252,6 +332,7 @@ static void rnpgbe_get_invariants_n500(struct mucse_hw *hw)
 	eth->vft_size = RNPGBE_VFT_TBL_SIZE;
 	eth->num_rar_entries = RNPGBE_RAR_ENTRIES;
 	/* setup mac info */
+	memcpy(&hw->mac.ops, &mac_ops_n500, sizeof(hw->mac.ops));
 	mac->mac_addr = hw->hw_addr + RNPGBE_MAC_BASE;
 	mac->back = hw;
 	/* set mac->mii */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index bcb4da45feac..98031600801b 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -22,9 +22,13 @@
 #define RX_AXI_RW_EN (0x03 << 0)
 #define TX_AXI_RW_EN (0x03 << 2)
 #define RNPGBE_DMA_RX_PROG_FULL_THRESH (0x00a0)
+#define DMA_PKT_FIFO_DATA_PROG_FULL_THRESH (0x0098)
 #define RING_VECTOR(n) (0x04 * (n))
+
 /* eth regs */
 #define RNPGBE_ETH_BYPASS (0x8000)
+#define RNPGBE_HOST_FILTER_EN (0x800c)
+#define RNPGBE_REDIR_EN (0x8030)
 #define RNPGBE_ETH_ERR_MASK_VECTOR (0x8060)
 #define RNPGBE_ETH_DEFAULT_RX_RING (0x806c)
 #define RNPGBE_PKT_LEN_ERR (2)
@@ -35,6 +39,13 @@
 #define RNPGBE_ETH_RAR_RL(n) (0xa000 + 0x04 * (n))
 #define RNPGBE_ETH_RAR_RH(n) (0xa400 + 0x04 * (n))
 #define RNPGBE_ETH_MUTICAST_HASH_TABLE(n) (0xac00 + 0x04 * (n))
+
+#define RNPGBE_LEGANCY_ENABLE (0xd004)
+#define RNPGBE_LEGANCY_TIME (0xd000)
+/* mac regs */
+#define M_RAH_AV 0x80000000
+#define RNPGBE_MAC_UNICAST_LOW(i) (0x44 + (i) * 0x08)
+#define RNPGBE_MAC_UNICAST_HIGH(i) (0x40 + (i) * 0x08)
 /* chip resources */
 #define RNPGBE_MAX_QUEUES (8)
 /* multicast control table */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index 0dbb942eb4c7..26fdac7d52a9 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -818,3 +818,310 @@ void rnpgbe_free_txrx(struct mucse *mucse)
 	rnpgbe_free_all_tx_resources(mucse);
 	rnpgbe_free_all_rx_resources(mucse);
 }
+
+/**
+ * rnpgbe_configure_tx_ring - Configure Tx ring after Reset
+ * @mucse: pointer to private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Tx descriptor ring after a reset.
+ **/
+static void rnpgbe_configure_tx_ring(struct mucse *mucse,
+				     struct mucse_ring *ring)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	int timeout = 0;
+	u32 status = 0;
+
+	ring_wr32(ring, DMA_TX_START, 0);
+	ring_wr32(ring, DMA_REG_TX_DESC_BUF_BASE_ADDR_LO, (u32)ring->dma);
+	ring_wr32(ring, DMA_REG_TX_DESC_BUF_BASE_ADDR_HI,
+		  (u32)(((u64)ring->dma) >> 32) | (hw->pfvfnum << 24));
+	ring_wr32(ring, DMA_REG_TX_DESC_BUF_LEN, ring->count);
+	ring->next_to_clean = ring_rd32(ring, DMA_REG_TX_DESC_BUF_HEAD);
+	ring->next_to_use = ring->next_to_clean;
+	ring->tail = ring->ring_addr + DMA_REG_TX_DESC_BUF_TAIL;
+	m_wr_reg(ring->tail, ring->next_to_use);
+	ring_wr32(ring, DMA_REG_TX_DESC_FETCH_CTRL,
+		  (8 << 0) | (TX_DEFAULT_BURST << 16));
+	ring_wr32(ring, DMA_REG_TX_INT_DELAY_TIMER,
+		  mucse->tx_usecs * hw->usecstocount);
+	ring_wr32(ring, DMA_REG_TX_INT_DELAY_PKTCNT, mucse->tx_frames);
+	do {
+		status = ring_rd32(ring, DMA_TX_READY);
+		usleep_range(100, 200);
+		timeout++;
+	} while ((status != 1) && (timeout < 100));
+	ring_wr32(ring, DMA_TX_START, 1);
+}
+
+/**
+ * rnpgbe_configure_tx - Configure Transmit Unit after Reset
+ * @mucse: pointer to private structure
+ *
+ * Configure the Tx DMA after a reset.
+ **/
+void rnpgbe_configure_tx(struct mucse *mucse)
+{
+	u32 i;
+
+	/* Setup the HW Tx Head and Tail descriptor pointers */
+	for (i = 0; i < (mucse->num_tx_queues); i++)
+		rnpgbe_configure_tx_ring(mucse, mucse->tx_ring[i]);
+}
+
+void rnpgbe_disable_rx_queue(struct mucse_ring *ring)
+{
+	ring_wr32(ring, DMA_RX_START, 0);
+}
+
+#if (PAGE_SIZE < 8192)
+static inline int rnpgbe_compute_pad(int rx_buf_len)
+{
+	int page_size, pad_size;
+
+	page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2);
+	pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len;
+
+	return pad_size;
+}
+
+static inline int rnpgbe_sg_size(void)
+{
+	int sg_size = SKB_WITH_OVERHEAD(PAGE_SIZE / 2) - NET_SKB_PAD;
+
+	sg_size -= NET_IP_ALIGN;
+	sg_size = ALIGN_DOWN(sg_size, 4);
+
+	return sg_size;
+}
+
+#define SG_SIZE  rnpgbe_sg_size()
+static inline int rnpgbe_skb_pad(void)
+{
+	int rx_buf_len = SG_SIZE;
+
+	return rnpgbe_compute_pad(rx_buf_len);
+}
+
+#define RNP_SKB_PAD rnpgbe_skb_pad()
+static inline unsigned int rnpgbe_rx_offset(void)
+{
+	return RNP_SKB_PAD;
+}
+
+#else /* PAGE_SIZE < 8192 */
+#define RNP_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
+#endif
+
+static void rnpgbe_configure_rx_ring(struct mucse *mucse,
+				     struct mucse_ring *ring)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	u64 desc_phy = ring->dma;
+	int split_size;
+
+	/* disable queue to avoid issues while updating state */
+	rnpgbe_disable_rx_queue(ring);
+
+	/* set descriptor registers */
+	ring_wr32(ring, DMA_REG_RX_DESC_BUF_BASE_ADDR_LO, (u32)desc_phy);
+	ring_wr32(ring, DMA_REG_RX_DESC_BUF_BASE_ADDR_HI,
+		  ((u32)(desc_phy >> 32)) | (hw->pfvfnum << 24));
+	ring_wr32(ring, DMA_REG_RX_DESC_BUF_LEN, ring->count);
+	ring->tail = ring->ring_addr + DMA_REG_RX_DESC_BUF_TAIL;
+	ring->next_to_clean = ring_rd32(ring, DMA_REG_RX_DESC_BUF_HEAD);
+	ring->next_to_use = ring->next_to_clean;
+
+#if (PAGE_SIZE < 8192)
+	split_size = SG_SIZE;
+	split_size = split_size >> 4;
+#else
+	/* we use fixed sg size */
+	split_size = 96;
+#endif
+	ring_wr32(ring, DMA_REG_RX_SCATTER_LENGTH, split_size);
+	ring_wr32(ring, DMA_REG_RX_DESC_FETCH_CTRL,
+		  0 | (RX_DEFAULT_LINE << 0) |
+		  (RX_DEFAULT_BURST << 16));
+	/* For NCSI cards, drop packets if no rx-desc is available
+	 * within 100000 clocks; otherwise the os may hang.
+	 */
+	if (hw->ncsi_en)
+		ring_wr32(ring, DMA_REG_RX_DESC_TIMEOUT_TH, 100000);
+	else
+		ring_wr32(ring, DMA_REG_RX_DESC_TIMEOUT_TH, 0);
+	ring_wr32(ring, DMA_REG_RX_INT_DELAY_TIMER,
+		  mucse->rx_usecs * hw->usecstocount);
+	ring_wr32(ring, DMA_REG_RX_INT_DELAY_PKTCNT, mucse->rx_frames);
+}
+
+/**
+ * rnpgbe_configure_rx - Configure Receive Unit after Reset
+ * @mucse: pointer to private structure
+ *
+ * Configure the Rx unit of the MAC after a reset.
+ **/
+void rnpgbe_configure_rx(struct mucse *mucse)
+{
+	int i;
+
+	for (i = 0; i < mucse->num_rx_queues; i++)
+		rnpgbe_configure_rx_ring(mucse, mucse->rx_ring[i]);
+}
+
+static irqreturn_t rnpgbe_msix_clean_rings(int irq, void *data)
+{
+	return IRQ_HANDLED;
+}
+
+static void rnpgbe_irq_affinity_notify(struct irq_affinity_notify *notify,
+				       const cpumask_t *mask)
+{
+	struct mucse_q_vector *q_vector =
+		container_of(notify, struct mucse_q_vector, affinity_notify);
+
+	cpumask_copy(&q_vector->affinity_mask, mask);
+}
+
+static void rnpgbe_irq_affinity_release(struct kref *ref)
+{
+}
+
+/**
+ * rnpgbe_request_msix_irqs - Initialize MSI-X interrupts
+ * @mucse: pointer to private structure
+ *
+ * rnpgbe_request_msix_irqs requests one irq per q_vector from
+ * the kernel and sets up irq affinity.
+ **/
+static int rnpgbe_request_msix_irqs(struct mucse *mucse)
+{
+	struct net_device *netdev = mucse->netdev;
+	int err;
+	int i = 0;
+	int q_off = mucse->q_vector_off;
+	struct msix_entry *entry;
+
+	for (i = 0; i < mucse->num_q_vectors; i++) {
+		struct mucse_q_vector *q_vector = mucse->q_vector[i];
+
+		entry = &mucse->msix_entries[i + q_off];
+		if (q_vector->tx.ring && q_vector->rx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-%s-%d-%d", netdev->name, "TxRx", i,
+				 q_vector->v_idx);
+		} else {
+			/* skip this unused q_vector */
+			continue;
+		}
+		err = request_irq(entry->vector, &rnpgbe_msix_clean_rings, 0,
+				  q_vector->name, q_vector);
+		if (err)
+			goto free_queue_irqs;
+		/* register for affinity change notifications */
+		q_vector->affinity_notify.notify = rnpgbe_irq_affinity_notify;
+		q_vector->affinity_notify.release = rnpgbe_irq_affinity_release;
+		irq_set_affinity_notifier(entry->vector,
+					  &q_vector->affinity_notify);
+		irq_set_affinity_hint(entry->vector, &q_vector->affinity_mask);
+	}
+
+	return 0;
+
+free_queue_irqs:
+	while (i) {
+		i--;
+		entry = &mucse->msix_entries[i + q_off];
+		irq_set_affinity_notifier(entry->vector, NULL);
+		irq_set_affinity_hint(entry->vector, NULL);
+		free_irq(entry->vector, mucse->q_vector[i]);
+	}
+	return err;
+}
+
+static irqreturn_t rnpgbe_intr(int irq, void *data)
+{
+	return IRQ_HANDLED;
+}
+
+/**
+ * rnpgbe_request_irq - initialize interrupts
+ * @mucse: pointer to private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+int rnpgbe_request_irq(struct mucse *mucse)
+{
+	int err;
+	struct mucse_hw *hw = &mucse->hw;
+
+	if (mucse->flags & M_FLAG_MSIX_ENABLED) {
+		pr_info("msix mode is used\n");
+		err = rnpgbe_request_msix_irqs(mucse);
+		hw->ops.set_irq_mode(hw, 0);
+	} else if (mucse->flags & M_FLAG_MSI_ENABLED) {
+		/* in this case one for all */
+		pr_info("msi mode is used\n");
+		err = request_irq(mucse->pdev->irq, rnpgbe_intr, 0,
+				  mucse->netdev->name, mucse);
+		mucse->hw.mbx.irq_enabled = true;
+		hw->ops.set_irq_mode(hw, 0);
+	} else {
+		pr_info("legacy mode is used\n");
+		err = request_irq(mucse->pdev->irq, rnpgbe_intr, IRQF_SHARED,
+				  mucse->netdev->name, mucse);
+		hw->ops.set_irq_mode(hw, 1);
+		mucse->hw.mbx.irq_enabled = true;
+	}
+	return err;
+}
+
+/**
+ * rnpgbe_free_msix_irqs - Free MSI-X interrupts
+ * @mucse: pointer to private structure
+ *
+ * rnpgbe_free_msix_irqs frees the MSI-X interrupts previously
+ * requested in rnpgbe_request_msix_irqs.
+ **/
+static int rnpgbe_free_msix_irqs(struct mucse *mucse)
+{
+	int i;
+	int q_off = mucse->q_vector_off;
+	struct msix_entry *entry;
+	struct mucse_q_vector *q_vector;
+
+	for (i = 0; i < mucse->num_q_vectors; i++) {
+		q_vector = mucse->q_vector[i];
+		entry = &mucse->msix_entries[i + q_off];
+		/* free only the irqs that were actually requested */
+		if (!q_vector->rx.ring && !q_vector->tx.ring)
+			continue;
+		/* clear the affinity notifier in the IRQ descriptor */
+		irq_set_affinity_notifier(entry->vector, NULL);
+		/* clear the affinity_mask in the IRQ descriptor */
+		irq_set_affinity_hint(entry->vector, NULL);
+		free_irq(entry->vector, q_vector);
+	}
+	return 0;
+}
+
+/**
+ * rnpgbe_free_irq - free interrupts
+ * @mucse: pointer to private structure
+ *
+ * Frees interrupts according to the initialized interrupt mode.
+ **/
+void rnpgbe_free_irq(struct mucse *mucse)
+{
+	if (mucse->flags & M_FLAG_MSIX_ENABLED) {
+		rnpgbe_free_msix_irqs(mucse);
+	} else {
+		/* msi or legacy: one irq for everything */
+		free_irq(mucse->pdev->irq, mucse);
+		mucse->hw.mbx.irq_enabled = false;
+	}
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
index 150d03f9ada9..818bd0cabe0c 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -16,6 +16,31 @@
 #define RX_INT_MASK (0x01)
 #define DMA_INT_CLR (0x28)
 #define DMA_INT_STAT (0x20)
+#define DMA_REG_RX_DESC_BUF_BASE_ADDR_HI (0x30)
+#define DMA_REG_RX_DESC_BUF_BASE_ADDR_LO (0x34)
+#define DMA_REG_RX_DESC_BUF_LEN (0x38)
+#define DMA_REG_RX_DESC_BUF_HEAD (0x3c)
+#define DMA_REG_RX_DESC_BUF_TAIL (0x40)
+#define DMA_REG_RX_DESC_FETCH_CTRL (0x44)
+#define DMA_REG_RX_INT_DELAY_TIMER (0x48)
+#define DMA_REG_RX_INT_DELAY_PKTCNT (0x4c)
+#define DMA_REG_RX_ARB_DEF_LVL (0x50)
+#define DMA_REG_RX_DESC_TIMEOUT_TH (0x54)
+#define DMA_REG_RX_SCATTER_LENGTH (0x58)
+#define DMA_REG_TX_DESC_BUF_BASE_ADDR_HI (0x60)
+#define DMA_REG_TX_DESC_BUF_BASE_ADDR_LO (0x64)
+#define DMA_REG_TX_DESC_BUF_LEN (0x68)
+#define DMA_REG_TX_DESC_BUF_HEAD (0x6c)
+#define DMA_REG_TX_DESC_BUF_TAIL (0x70)
+#define DMA_REG_TX_DESC_FETCH_CTRL (0x74)
+#define DMA_REG_TX_INT_DELAY_TIMER (0x78)
+#define DMA_REG_TX_INT_DELAY_PKTCNT (0x7c)
+#define DMA_REG_TX_ARB_DEF_LVL (0x80)
+#define DMA_REG_TX_FLOW_CTRL_TH (0x84)
+#define DMA_REG_TX_FLOW_CTRL_TM (0x88)
+#define TX_DEFAULT_BURST (8)
+#define RX_DEFAULT_LINE (32)
+#define RX_DEFAULT_BURST (16)
 
 #define mucse_for_each_ring(pos, head)  \
 	for (pos = (head).ring; pos; pos = pos->next)
@@ -24,5 +49,10 @@ int rnpgbe_init_interrupt_scheme(struct mucse *mucse);
 void rnpgbe_clear_interrupt_scheme(struct mucse *mucse);
 int rnpgbe_setup_txrx(struct mucse *mucse);
 void rnpgbe_free_txrx(struct mucse *mucse);
+void rnpgbe_configure_tx(struct mucse *mucse);
+void rnpgbe_disable_rx_queue(struct mucse_ring *ring);
+void rnpgbe_configure_rx(struct mucse *mucse);
+int rnpgbe_request_irq(struct mucse *mucse);
+void rnpgbe_free_irq(struct mucse *mucse);
 
 #endif /* _RNPGBE_LIB_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index 95a68b6d08a5..82acf45ad901 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -195,6 +195,16 @@ static int init_firmware_for_n210(struct mucse_hw *hw)
 	return err;
 }
 
+static void rnpgbe_configure(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+
+	hw->ops.set_mac(hw, hw->addr);
+	hw->ops.update_hw_info(hw);
+	rnpgbe_configure_tx(mucse);
+	rnpgbe_configure_rx(mucse);
+}
+
 /**
  * rnpgbe_open - Called when a network interface is made active
  * @netdev: network interface device structure
@@ -215,7 +225,25 @@ static int rnpgbe_open(struct net_device *netdev)
 
 	netif_carrier_off(netdev);
 	err = rnpgbe_setup_txrx(mucse);
-
+	if (err)
+		return err;
+	rnpgbe_configure(mucse);
+	err = rnpgbe_request_irq(mucse);
+	if (err)
+		goto err_req_irq;
+	err = netif_set_real_num_tx_queues(netdev, mucse->num_tx_queues);
+	if (err)
+		goto err_set_queues;
+	err = netif_set_real_num_rx_queues(netdev, mucse->num_rx_queues);
+	if (err)
+		goto err_set_queues;
+
+	return 0;
+
+err_set_queues:
+	rnpgbe_free_irq(mucse);
+err_req_irq:
+	rnpgbe_free_txrx(mucse);
 	return err;
 }
 
@@ -232,6 +255,7 @@ static int rnpgbe_close(struct net_device *netdev)
 {
 	struct mucse *mucse = netdev_priv(netdev);
 
+	rnpgbe_free_irq(mucse);
 	rnpgbe_free_txrx(mucse);
 
 	return 0;
@@ -582,8 +606,10 @@ static void __rnpgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 	*enable_wake = false;
 	netif_device_detach(netdev);
 	rtnl_lock();
-	if (netif_running(netdev))
+	if (netif_running(netdev)) {
+		rnpgbe_free_irq(mucse);
 		rnpgbe_free_txrx(mucse);
+	}
 	rtnl_unlock();
 	remove_mbx_irq(mucse);
 	rnpgbe_clear_interrupt_scheme(mucse);
-- 
2.25.1



* [PATCH 11/15] net: rnpgbe: Add setup hw ring-vector, true up/down hw
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (9 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 10/15] net: rnpgbe: Add netdev irq in open Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 12/15] net: rnpgbe: Add link up handler Dong Yibo
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Set up the hardware ring-vector mapping and bring the hardware up/down
in the open/close path.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |   4 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  14 +++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 113 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  72 +++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  56 +++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c |  90 ++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h |  31 ++++-
 7 files changed, 379 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index d4e150c14582..c049952f41e8 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -183,6 +183,8 @@ struct mucse_hw_operations {
 	void (*update_hw_info)(struct mucse_hw *hw);
 	void (*set_mac)(struct mucse_hw *hw, u8 *mac);
 	void (*set_irq_mode)(struct mucse_hw *hw, bool legacy);
+	void (*set_mbx_link_event)(struct mucse_hw *hw, int enable);
+	void (*set_mbx_ifup)(struct mucse_hw *hw, int enable);
 };
 
 enum {
@@ -531,6 +533,7 @@ struct mucse {
 	struct net_device *netdev;
 	struct pci_dev *pdev;
 	struct mucse_hw hw;
+	u16 msg_enable;
 	/* board number */
 	u16 bd_number;
 	u16 tx_work_limit;
@@ -563,6 +566,7 @@ struct mucse {
 	u16 tx_frames;
 	u16 tx_usecs;
 	unsigned long state;
+	unsigned long link_check_timeout;
 	struct timer_list service_timer;
 	struct work_struct service_task;
 	char name[60];
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 5ad287e398a7..7cc9134952bf 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -288,6 +288,18 @@ static void rnpgbe_set_irq_mode_n500(struct mucse_hw *hw, bool legacy)
 	}
 }
 
+static void rnpgbe_set_mbx_link_event_hw_ops_n500(struct mucse_hw *hw,
+						  int enable)
+{
+	mucse_mbx_link_event_enable(hw, enable);
+}
+
+static void rnpgbe_set_mbx_ifup_hw_ops_n500(struct mucse_hw *hw,
+					    int enable)
+{
+	mucse_mbx_ifup_down(hw, enable);
+}
+
 static struct mucse_hw_operations hw_ops_n500 = {
 	.init_hw = &rnpgbe_init_hw_ops_n500,
 	.reset_hw = &rnpgbe_reset_hw_ops_n500,
@@ -297,6 +309,8 @@ static struct mucse_hw_operations hw_ops_n500 = {
 	.set_mac = &rnpgbe_set_mac_hw_ops_n500,
 	.update_hw_info = &rnpgbe_update_hw_info_hw_ops_n500,
 	.set_irq_mode = &rnpgbe_set_irq_mode_n500,
+	.set_mbx_link_event = &rnpgbe_set_mbx_link_event_hw_ops_n500,
+	.set_mbx_ifup = &rnpgbe_set_mbx_ifup_hw_ops_n500,
 };
 
 /**
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index 26fdac7d52a9..fec084e20513 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -1125,3 +1125,116 @@ void rnpgbe_free_irq(struct mucse *mucse)
 		mucse->hw.mbx.irq_enabled = false;
 	}
 }
+
+/**
+ * rnpgbe_napi_enable_all - enable all napi
+ * @mucse: pointer to private structure
+ *
+ * Enable NAPI polling on all queue vectors.
+ **/
+void rnpgbe_napi_enable_all(struct mucse *mucse)
+{
+	int q_idx;
+
+	for (q_idx = 0; q_idx < mucse->num_q_vectors; q_idx++)
+		napi_enable(&mucse->q_vector[q_idx]->napi);
+}
+
+/**
+ * rnpgbe_napi_disable_all - disable all napi
+ * @mucse: pointer to private structure
+ *
+ * Disable NAPI polling on all queue vectors.
+ **/
+void rnpgbe_napi_disable_all(struct mucse *mucse)
+{
+	int q_idx;
+
+	for (q_idx = 0; q_idx < mucse->num_q_vectors; q_idx++)
+		napi_disable(&mucse->q_vector[q_idx]->napi);
+}
+
+/**
+ * rnpgbe_set_ring_vector - map a ring's interrupt cause to an MSI-X vector
+ * @mucse: pointer to private structure
+ * @queue: queue to map the corresponding interrupt to
+ * @msix_vector: the vector to map to the corresponding queue
+ */
+static void rnpgbe_set_ring_vector(struct mucse *mucse,
+				   u8 queue, u8 msix_vector)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	u32 data = 0;
+
+	data = hw->pfvfnum << 24;
+	data |= (msix_vector << 8);
+	data |= (msix_vector << 0);
+	m_wr_reg(hw->ring_msix_base + RING_VECTOR(queue), data);
+}
+
+/**
+ * rnpgbe_configure_msix - Configure MSI-X hardware
+ * @mucse: pointer to private structure
+ *
+ * rnpgbe_configure_msix sets up the hardware to properly generate MSI-X
+ * interrupts.
+ **/
+void rnpgbe_configure_msix(struct mucse *mucse)
+{
+	struct mucse_q_vector *q_vector;
+	int i;
+	struct mucse_hw *hw = &mucse->hw;
+
+	/* configure the ring-to-msix-vector mapping table */
+	for (i = 0; i < mucse->num_q_vectors; i++) {
+		struct mucse_ring *ring;
+
+		q_vector = mucse->q_vector[i];
+		mucse_for_each_ring(ring, q_vector->rx) {
+			rnpgbe_set_ring_vector(mucse, ring->rnpgbe_queue_idx,
+					       q_vector->v_idx);
+		}
+	}
+	/* n500 should mask other */
+	if (hw->hw_type == rnpgbe_hw_n500 ||
+	    hw->hw_type == rnpgbe_hw_n210 ||
+	    hw->hw_type == rnpgbe_hw_n210L) {
+		/*
+		 * vector 8:  LPI | PMT
+		 * vector 9:  BMC_RX_IRQ
+		 * vector 10: PHY_IRQ | LPI_IRQ
+		 * vector 11: BMC_TX_IRQ
+		 * leaving these unmasked may trigger DMAR errors when
+		 * the PF is assigned to a VM
+		 */
+#define OTHER_VECTOR_START (8)
+#define OTHER_VECTOR_STOP (11)
+#define MSIX_UNUSED (0x0f0f)
+		for (i = OTHER_VECTOR_START; i <= OTHER_VECTOR_STOP; i++) {
+			if (hw->feature_flags & M_HW_SOFT_MASK_OTHER_IRQ) {
+				m_wr_reg(hw->ring_msix_base +
+					 RING_VECTOR(i),
+					 MSIX_UNUSED);
+			} else {
+				m_wr_reg(hw->ring_msix_base +
+					 RING_VECTOR(i), 0);
+			}
+		}
+		if (hw->feature_flags & M_HW_FEATURE_EEE) {
+#define LPI_IRQ (8)
+			/* only open lpi irq */
+			if (hw->feature_flags & M_HW_SOFT_MASK_OTHER_IRQ) {
+				m_wr_reg(hw->ring_msix_base +
+					 RING_VECTOR(LPI_IRQ),
+					 0x000f);
+			} else {
+				m_wr_reg(hw->ring_msix_base +
+					 RING_VECTOR(LPI_IRQ),
+					 0x0000);
+			}
+		}
+	}
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
index 818bd0cabe0c..65bd97c26eaf 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -7,6 +7,7 @@
 #include "rnpgbe.h"
 
 #define RING_OFFSET(n) (0x100 * (n))
+#define DMA_DUMY (0xc)
 #define DMA_RX_START (0x10)
 #define DMA_RX_READY (0x14)
 #define DMA_TX_START (0x18)
@@ -14,6 +15,8 @@
 #define DMA_INT_MASK (0x24)
 #define TX_INT_MASK (0x02)
 #define RX_INT_MASK (0x01)
+#define DMA_INT_TRIG (0x2c)
+#define INT_VALID (0x3 << 16)
 #define DMA_INT_CLR (0x28)
 #define DMA_INT_STAT (0x20)
 #define DMA_REG_RX_DESC_BUF_BASE_ADDR_HI (0x30)
@@ -42,9 +45,75 @@
 #define RX_DEFAULT_LINE (32)
 #define RX_DEFAULT_BURST (16)
 
+#define RING_VECTOR(n) (0x04 * (n))
 #define mucse_for_each_ring(pos, head)  \
 	for (pos = (head).ring; pos; pos = pos->next)
 
+#define e_info(msglvl, format, arg...)  \
+	netif_info(mucse, msglvl, mucse->netdev, format, ##arg)
+
+enum link_event_mask {
+	EVT_LINK_UP = 1,
+	EVT_NO_MEDIA = 2,
+	EVT_LINK_FAULT = 3,
+	EVT_PHY_TEMP_ALARM = 4,
+	EVT_EXCESSIVE_ERRORS = 5,
+	EVT_SIGNAL_DETECT = 6,
+	EVT_AUTO_NEGOTIATION_DONE = 7,
+	EVT_MODULE_QUALIFICATION_FAILD = 8,
+	EVT_PORT_TX_SUSPEND = 9,
+};
+
+static inline void rnpgbe_irq_enable_queues(struct mucse *mucse,
+					    struct mucse_q_vector *q_vector)
+{
+	struct mucse_ring *ring;
+
+	mucse_for_each_ring(ring, q_vector->rx) {
+		m_wr_reg(ring->dma_int_mask, ~(RX_INT_MASK | TX_INT_MASK));
+		ring_wr32(ring, DMA_INT_TRIG, INT_VALID | TX_INT_MASK |
+			  RX_INT_MASK);
+	}
+}
+
+static inline void rnpgbe_irq_enable(struct mucse *mucse)
+{
+	int i;
+
+	for (i = 0; i < mucse->num_q_vectors; i++)
+		rnpgbe_irq_enable_queues(mucse, mucse->q_vector[i]);
+}
+
+static inline void rnpgbe_irq_disable_queues(struct mucse_q_vector *q_vector)
+{
+	struct mucse_ring *ring;
+
+	mucse_for_each_ring(ring, q_vector->tx) {
+		ring_wr32(ring, DMA_INT_TRIG,
+			  INT_VALID | ~(TX_INT_MASK | RX_INT_MASK));
+		m_wr_reg(ring->dma_int_mask, (RX_INT_MASK | TX_INT_MASK));
+	}
+}
+
+/**
+ * rnpgbe_irq_disable - Mask off interrupt generation on the NIC
+ * @mucse: pointer to private structure
+ **/
+static inline void rnpgbe_irq_disable(struct mucse *mucse)
+{
+	int i, j;
+
+	for (i = 0; i < mucse->num_q_vectors; i++) {
+		rnpgbe_irq_disable_queues(mucse->q_vector[i]);
+		j = i + mucse->q_vector_off;
+
+		if (mucse->flags & M_FLAG_MSIX_ENABLED)
+			synchronize_irq(mucse->msix_entries[j].vector);
+		else
+			synchronize_irq(mucse->pdev->irq);
+	}
+}
+
 int rnpgbe_init_interrupt_scheme(struct mucse *mucse);
 void rnpgbe_clear_interrupt_scheme(struct mucse *mucse);
 int rnpgbe_setup_txrx(struct mucse *mucse);
@@ -54,5 +123,8 @@ void rnpgbe_disable_rx_queue(struct mucse_ring *ring);
 void rnpgbe_configure_rx(struct mucse *mucse);
 int rnpgbe_request_irq(struct mucse *mucse);
 void rnpgbe_free_irq(struct mucse *mucse);
+void rnpgbe_napi_enable_all(struct mucse *mucse);
+void rnpgbe_napi_disable_all(struct mucse *mucse);
+void rnpgbe_configure_msix(struct mucse *mucse);
 
 #endif /* _RNPGBE_LIB_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index 82acf45ad901..01cff0a780ff 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -47,6 +47,11 @@ static struct pci_device_id rnpgbe_pci_tbl[] = {
 	{0, },
 };
 
+#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)
+static int debug = -1;
+module_param(debug, int, 0);
+MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
+
 static struct workqueue_struct *rnpgbe_wq;
 
 void rnpgbe_service_event_schedule(struct mucse *mucse)
@@ -205,6 +210,36 @@ static void rnpgbe_configure(struct mucse *mucse)
 	rnpgbe_configure_rx(mucse);
 }
 
+static void rnpgbe_up_complete(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	int i;
+
+	rnpgbe_configure_msix(mucse);
+	/* order prior hw setup before clearing __MUCSE_DOWN */
+	smp_mb__before_atomic();
+	clear_bit(__MUCSE_DOWN, &mucse->state);
+	rnpgbe_napi_enable_all(mucse);
+	/* enable interrupts */
+	rnpgbe_irq_enable(mucse);
+	/* enable transmits */
+	netif_tx_start_all_queues(mucse->netdev);
+	/* start rx dma */
+	for (i = 0; i < mucse->num_rx_queues; i++)
+		ring_wr32(mucse->rx_ring[i], DMA_RX_START, 1);
+
+	/* bring the link up in the watchdog */
+	mucse->flags |= M_FLAG_NEED_LINK_UPDATE;
+	mucse->link_check_timeout = jiffies;
+	mod_timer(&mucse->service_timer, jiffies);
+
+	/* tell firmware the interface is up and ask for link events */
+	hw->link = 0;
+	hw->ops.set_mbx_link_event(hw, 1);
+	hw->ops.set_mbx_ifup(hw, 1);
+}
+
 /**
  * rnpgbe_open - Called when a network interface is made active
  * @netdev: network interface device structure
@@ -235,6 +270,7 @@ static int rnpgbe_open(struct net_device *netdev)
 	err = netif_set_real_num_rx_queues(netdev, mucse->num_rx_queues);
 	if (err)
 		goto err_set_queues;
+	rnpgbe_up_complete(mucse);
 
 	return 0;
 
 	return err;
 }
 
+static void rnpgbe_down(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	struct net_device *netdev = mucse->netdev;
+
+	set_bit(__MUCSE_DOWN, &mucse->state);
+	hw->ops.set_mbx_link_event(hw, 0);
+	hw->ops.set_mbx_ifup(hw, 0);
+	if (netif_carrier_ok(netdev))
+		e_info(drv, "NIC Link is Down\n");
+	netif_tx_stop_all_queues(netdev);
+	netif_carrier_off(netdev);
+	rnpgbe_irq_disable(mucse);
+	netif_tx_disable(netdev);
+	rnpgbe_napi_disable_all(mucse);
+	mucse->flags &= ~M_FLAG_NEED_LINK_UPDATE;
+}
+
 /**
  * rnpgbe_close - Disables a network interface
  * @netdev: network interface device structure
@@ -255,6 +309,7 @@ static int rnpgbe_close(struct net_device *netdev)
 {
 	struct mucse *mucse = netdev_priv(netdev);
 
+	rnpgbe_down(mucse);
 	rnpgbe_free_irq(mucse);
 	rnpgbe_free_txrx(mucse);
 
@@ -443,6 +498,7 @@ static int rnpgbe_add_adpater(struct pci_dev *pdev,
 	hw->hw_addr = hw_addr;
 	hw->dma.dma_version = dma_version;
 	hw->driver_version = driver_version;
+	mucse->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
 	hw->nr_lane = 0;
 	ii->get_invariants(hw);
 	hw->mbx.ops.init_params(hw);
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
index f86fb81f4db4..fc6c0dbfff84 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
@@ -4,6 +4,7 @@
 #include <linux/pci.h>
 
 #include "rnpgbe.h"
+#include "rnpgbe_lib.h"
 #include "rnpgbe_mbx_fw.h"
 
 /**
@@ -215,6 +216,47 @@ static int mucse_mbx_fw_post_req(struct mucse_hw *hw,
 	return err;
 }
 
+/**
+ * mucse_mbx_write_posted_locked - Post a mbx req to firmware
+ * @hw: Pointer to the HW structure
+ * @req: Pointer to the cmd req structure
+ *
+ * mucse_mbx_write_posted_locked posts a mailbox request to firmware
+ * under the mailbox lock and retries until the hardware has read it
+ * out.
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int mucse_mbx_write_posted_locked(struct mucse_hw *hw,
+					 struct mbx_fw_cmd_req *req)
+{
+	int err = 0;
+	int retry = 3;
+
+	/* if the pcie channel is offline, there is nothing to do */
+	if (pci_channel_offline(hw->pdev))
+		return -EIO;
+
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+try_again:
+	retry--;
+	if (retry < 0) {
+		mutex_unlock(&hw->mbx.lock);
+		return -EIO;
+	}
+
+	err = hw->mbx.ops.write_posted(hw, (u32 *)req,
+				       L_WD(req->datalen + MBX_REQ_HDR_LEN),
+				       MBX_FW);
+	if (err)
+		goto try_again;
+
+	mutex_unlock(&hw->mbx.lock);
+
+	return err;
+}
+
 int rnpgbe_mbx_lldp_get(struct mucse_hw *hw)
 {
 	int err;
@@ -411,3 +453,51 @@ int mucse_fw_get_macaddr(struct mucse_hw *hw, int pfvfnum,
 out:
 	return err;
 }
+
+int mucse_mbx_link_event_enable(struct mucse_hw *hw, int enable)
+{
+	struct mbx_fw_cmd_reply reply;
+	struct mbx_fw_cmd_req req;
+	int err;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+
+	if (enable)
+		hw_wr32(hw, DMA_DUMY, 0xa0000000);
+
+	build_link_set_event_mask(&req, BIT(EVT_LINK_UP),
+				  (enable & 1) << EVT_LINK_UP, &req);
+
+	err = mucse_mbx_write_posted_locked(hw, &req);
+	if (!enable)
+		hw_wr32(hw, DMA_DUMY, 0);
+
+	return err;
+}
+
+int mucse_mbx_ifup_down(struct mucse_hw *hw, int up)
+{
+	struct mbx_fw_cmd_reply reply;
+	struct mbx_fw_cmd_req req;
+	int err;
+
+	memset(&req, 0, sizeof(req));
+	memset(&reply, 0, sizeof(reply));
+
+	build_ifup_down(&req, hw->nr_lane, up);
+
+	if (mutex_lock_interruptible(&hw->mbx.lock))
+		return -EAGAIN;
+
+	err = hw->mbx.ops.write_posted(hw,
+				       (u32 *)&req,
+				       L_WD(req.datalen + MBX_REQ_HDR_LEN),
+				       MBX_FW);
+
+	mutex_unlock(&hw->mbx.lock);
+	if (up)
+		hw_wr32(hw, DMA_DUMY, 0xa0000000);
+
+	return err;
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
index babdfc1f56f1..cd5a98acd983 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
@@ -609,6 +609,34 @@ static inline void build_get_macaddress_req(struct mbx_fw_cmd_req *req,
 	req->get_mac_addr.pfvf_num = pfvfnum;
 }
 
+static inline void build_link_set_event_mask(struct mbx_fw_cmd_req *req,
+					     unsigned short event_mask,
+					     unsigned short enable,
+					     void *cookie)
+{
+	req->flags = 0;
+	req->opcode = SET_EVENT_MASK;
+	req->datalen = sizeof(req->stat_event_mask);
+	req->cookie = cookie;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->stat_event_mask.event_mask = event_mask;
+	req->stat_event_mask.enable_stat = enable;
+}
+
+static inline void build_ifup_down(struct mbx_fw_cmd_req *req,
+				   unsigned int nr_lane, int up)
+{
+	req->flags = 0;
+	req->opcode = IFUP_DOWN;
+	req->datalen = sizeof(req->ifup);
+	req->cookie = NULL;
+	req->reply_lo = 0;
+	req->reply_hi = 0;
+	req->ifup.lane = nr_lane;
+	req->ifup.up = up;
+}
+
 int mucse_mbx_get_capability(struct mucse_hw *hw);
 int rnpgbe_mbx_lldp_get(struct mucse_hw *hw);
 int mucse_mbx_ifinsmod(struct mucse_hw *hw, int status);
@@ -617,5 +645,6 @@ int mucse_mbx_ifforce_control_mac(struct mucse_hw *hw, int status);
 int mucse_mbx_fw_reset_phy(struct mucse_hw *hw);
 int mucse_fw_get_macaddr(struct mucse_hw *hw, int pfvfnum,
 			 u8 *mac_addr, int nr_lane);
-
+int mucse_mbx_link_event_enable(struct mucse_hw *hw, int enable);
+int mucse_mbx_ifup_down(struct mucse_hw *hw, int up);
 #endif /* _RNPGBE_MBX_FW_H */
-- 
2.25.1



* [PATCH 12/15] net: rnpgbe: Add link up handler
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (10 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 11/15] net: rnpgbe: Add setup hw ring-vector, true up/down hw Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 13/15] net: rnpgbe: Add base tx functions Dong Yibo
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add a link status handler to the service task and report link changes
to the stack.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  55 ++++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  19 +++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   | 138 +++++++++++++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h    |   1 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c | 147 ++++++++++++++++++
 .../net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h |   1 +
 6 files changed, 359 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index c049952f41e8..5ca2ec73bbe7 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -25,6 +25,15 @@ enum rnpgbe_hw_type {
 	rnpgbe_hw_unknow,
 };
 
+enum speed_enum {
+	speed_10,
+	speed_100,
+	speed_1000,
+	speed_10000,
+	speed_25000,
+	speed_40000,
+};
+
 struct mucse_dma_info {
 	u8 __iomem *dma_base_addr;
 	u8 __iomem *dma_ring_addr;
@@ -120,6 +129,31 @@ struct mucse_mbx_operations {
 			 bool enable);
 };
 
+/* Flow Control Settings */
+enum mucse_fc_mode {
+	mucse_fc_none = 0,
+	mucse_fc_rx_pause,
+	mucse_fc_tx_pause,
+	mucse_fc_full,
+	mucse_fc_default
+};
+
+#define PAUSE_TX (0x1)
+#define PAUSE_RX (0x2)
+#define PAUSE_AUTO (0x10)
+#define ASYM_PAUSE BIT(11)
+#define SYM_PAUSE BIT(10)
+
+#define M_MAX_TRAFFIC_CLASS (4)
+/* Flow control parameters */
+struct mucse_fc_info {
+	u32 high_water[M_MAX_TRAFFIC_CLASS];
+	u32 low_water[M_MAX_TRAFFIC_CLASS];
+	u16 pause_time;
+	enum mucse_fc_mode current_mode;
+	enum mucse_fc_mode requested_mode;
+};
+
 struct mucse_mbx_stats {
 	u32 msgs_tx;
 	u32 msgs_rx;
@@ -185,6 +219,8 @@ struct mucse_hw_operations {
 	void (*set_irq_mode)(struct mucse_hw *hw, bool legacy);
 	void (*set_mbx_link_event)(struct mucse_hw *hw, int enable);
 	void (*set_mbx_ifup)(struct mucse_hw *hw, int enable);
+	void (*check_link)(struct mucse_hw *hw, u32 *speed, bool *link_up,
+			   bool *duplex);
 };
 
 enum {
@@ -223,6 +259,7 @@ struct mucse_hw {
 	struct mucse_dma_info dma;
 	struct mucse_eth_info eth;
 	struct mucse_mac_info mac;
+	struct mucse_fc_info fc;
 	struct mucse_mbx_info mbx;
 	struct mucse_addr_filter_info addr_ctrl;
 #define M_NET_FEATURE_SG ((u32)(1 << 0))
@@ -253,13 +290,17 @@ struct mucse_hw {
 	u16 max_msix_vectors;
 	int nr_lane;
 	struct lldp_status lldp_status;
+	int speed;
+	u32 duplex;
+	u32 tp_mdx;
 	int link;
 	u8 addr[ETH_ALEN];
 	u8 perm_addr[ETH_ALEN];
 };
 
 enum mucse_state_t {
-	__MMUCSE_TESTING,
+	__MUCSE_TESTING,
+	__MUCSE_RESETTING,
 	__MUCSE_DOWN,
 	__MUCSE_SERVICE_SCHED,
 	__MUCSE_PTP_TX_IN_PROGRESS,
@@ -547,6 +588,7 @@ struct mucse {
 	u32 priv_flags;
 #define M_PRIV_FLAG_TX_COALESCE ((u32)(1 << 25))
 #define M_PRIV_FLAG_RX_COALESCE ((u32)(1 << 26))
+#define M_PRIV_FLAG_LLDP ((u32)(1 << 27))
 	struct mucse_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
 	int tx_ring_item_count;
 	int num_tx_queues;
@@ -565,6 +607,9 @@ struct mucse {
 	u16 rx_frames;
 	u16 tx_frames;
 	u16 tx_usecs;
+	bool link_up;
+	u32 link_speed;
+	bool duplex;
 	unsigned long state;
 	unsigned long link_check_timeout;
 	struct timer_list service_timer;
@@ -613,9 +658,17 @@ static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
 #define M_PKT_TIMEOUT (30)
 #define M_RX_PKT_POLL_BUDGET (64)
 
+#define M_LINK_SPEED_UNKNOWN 0
+#define M_LINK_SPEED_10_FULL BIT(2)
+#define M_LINK_SPEED_100_FULL BIT(3)
+#define M_LINK_SPEED_1GB_FULL BIT(4)
+
+#define M_TRY_LINK_TIMEOUT (4 * HZ)
+
 #define m_rd_reg(reg) readl((void *)(reg))
 #define m_wr_reg(reg, val) writel((val), (void *)(reg))
 #define hw_wr32(hw, reg, val) m_wr_reg((hw)->hw_addr + (reg), (val))
+#define hw_rd32(hw, reg) m_rd_reg((hw)->hw_addr + (reg))
 #define dma_wr32(dma, reg, val) m_wr_reg((dma)->dma_base_addr + (reg), (val))
 #define dma_rd32(dma, reg) m_rd_reg((dma)->dma_base_addr + (reg))
 #define eth_wr32(eth, reg, val) m_wr_reg((eth)->eth_base_addr + (reg), (val))
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index 7cc9134952bf..cb2448f497fe 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -300,6 +300,24 @@ static void rnpgbe_set_mbx_ifup_hw_ops_n500(struct mucse_hw *hw,
 	mucse_mbx_ifup_down(hw, enable);
 }
 
+static void rnpgbe_check_mac_link_hw_ops_n500(struct mucse_hw *hw,
+					      u32 *speed,
+					      bool *link_up,
+					      bool *duplex)
+{
+	if (hw->speed == 10)
+		*speed = M_LINK_SPEED_10_FULL;
+	else if (hw->speed == 100)
+		*speed = M_LINK_SPEED_100_FULL;
+	else if (hw->speed == 1000)
+		*speed = M_LINK_SPEED_1GB_FULL;
+	else
+		*speed = M_LINK_SPEED_UNKNOWN;
+
+	*link_up = !!hw->link;
+	*duplex = !!hw->duplex;
+}
+
 static struct mucse_hw_operations hw_ops_n500 = {
 	.init_hw = &rnpgbe_init_hw_ops_n500,
 	.reset_hw = &rnpgbe_reset_hw_ops_n500,
@@ -311,6 +329,7 @@ static struct mucse_hw_operations hw_ops_n500 = {
 	.set_irq_mode = &rnpgbe_set_irq_mode_n500,
 	.set_mbx_link_event = &rnpgbe_set_mbx_link_event_hw_ops_n500,
 	.set_mbx_ifup = &rnpgbe_set_mbx_ifup_hw_ops_n500,
+	.check_link = &rnpgbe_check_mac_link_hw_ops_n500,
 };
 
 /**
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index 01cff0a780ff..c2f53af3de09 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -84,12 +84,144 @@ static void rnpgbe_service_timer(struct timer_list *t)
 		rnpgbe_service_event_schedule(mucse);
 }
 
+static void rnpgbe_service_event_complete(struct mucse *mucse)
+{
+	/* flush memory to make sure state is correct before next watchdog */
+	smp_mb__before_atomic();
+	clear_bit(__MUCSE_SERVICE_SCHED, &mucse->state);
+}
+
+/**
+ * rnpgbe_watchdog_update_link - update the link status
+ * @mucse: pointer to the device adapter structure
+ **/
+static void rnpgbe_watchdog_update_link(struct mucse *mucse)
+{
+	struct mucse_hw *hw = &mucse->hw;
+	u32 link_speed = mucse->link_speed;
+	bool link_up;
+	bool duplex;
+	bool flow_rx = true, flow_tx = true;
+
+	if (!(mucse->flags & M_FLAG_NEED_LINK_UPDATE))
+		return;
+
+	if (hw->ops.check_link) {
+		hw->ops.check_link(hw, &link_speed, &link_up, &duplex);
+	} else {
+		/* always assume link is up, if no check link function */
+		link_speed = M_LINK_SPEED_1GB_FULL;
+		link_up = true;
+	}
+
+	if (link_up || time_after(jiffies, (mucse->link_check_timeout +
+					    M_TRY_LINK_TIMEOUT)))
+		mucse->flags &= ~M_FLAG_NEED_LINK_UPDATE;
+	mucse->link_up = link_up;
+	mucse->link_speed = link_speed;
+	mucse->duplex = duplex;
+
+	switch (hw->fc.current_mode) {
+	case mucse_fc_none:
+		flow_rx = false;
+		flow_tx = false;
+		break;
+	case mucse_fc_tx_pause:
+		flow_rx = false;
+		flow_tx = true;
+		break;
+	case mucse_fc_rx_pause:
+		flow_rx = true;
+		flow_tx = false;
+		break;
+	case mucse_fc_full:
+		flow_rx = true;
+		flow_tx = true;
+		break;
+	default:
+		flow_rx = false;
+		flow_tx = false;
+	}
+
+	if (mucse->link_up) {
+		e_info(drv, "NIC Link is Up %s, %s Duplex, Flow Control: %s\n",
+		       (link_speed == M_LINK_SPEED_1GB_FULL ? "1000 Mbps" :
+			(link_speed == M_LINK_SPEED_100_FULL ? "100 Mbps" :
+			 (link_speed == M_LINK_SPEED_10_FULL ? "10 Mbps" :
+			  "unknown speed"))),
+		       ((duplex) ? "Full" : "Half"),
+		       ((flow_rx && flow_tx) ? "RX/TX" :
+			(flow_rx ? "RX" : (flow_tx ? "TX" : "None"))));
+	}
+}
+
+/**
+ * rnpgbe_watchdog_link_is_up - update netif_carrier status and
+ * wake the transmit queues
+ * @mucse: pointer to the device adapter structure
+ **/
+static void rnpgbe_watchdog_link_is_up(struct mucse *mucse)
+{
+	struct net_device *netdev = mucse->netdev;
+
+	/* only continue if link was previously down */
+	if (netif_carrier_ok(netdev))
+		return;
+	netif_carrier_on(netdev);
+	netif_tx_wake_all_queues(netdev);
+}
+
+/**
+ * rnpgbe_watchdog_link_is_down - update netif_carrier status and
+ * print link down message
+ * @mucse: pointer to the adapter structure
+ **/
+static void rnpgbe_watchdog_link_is_down(struct mucse *mucse)
+{
+	struct net_device *netdev = mucse->netdev;
+
+	mucse->link_up = false;
+	mucse->link_speed = 0;
+	/* only continue if link was up previously */
+	if (!netif_carrier_ok(netdev))
+		return;
+	e_info(drv, "NIC Link is Down\n");
+	netif_carrier_off(netdev);
+	netif_tx_stop_all_queues(netdev);
+}
+
+/**
+ * rnpgbe_watchdog_subtask - check and bring link up
+ * @mucse: pointer to the device adapter structure
+ **/
+static void rnpgbe_watchdog_subtask(struct mucse *mucse)
+{
+	/* if the interface is down, do nothing */
+	/* TODO: link status should still be handled when SR-IOV is enabled */
+	if (test_bit(__MUCSE_DOWN, &mucse->state) ||
+	    test_bit(__MUCSE_RESETTING, &mucse->state))
+		return;
+
+	rnpgbe_watchdog_update_link(mucse);
+	if (mucse->link_up)
+		rnpgbe_watchdog_link_is_up(mucse);
+	else
+		rnpgbe_watchdog_link_is_down(mucse);
+}
+
 /**
  * rnpgbe_service_task - manages and runs subtasks
  * @work: pointer to work_struct containing our data
  **/
 static void rnpgbe_service_task(struct work_struct *work)
 {
+	struct mucse *mucse = container_of(work, struct mucse, service_task);
+
+	rnpgbe_watchdog_subtask(mucse);
+	rnpgbe_service_event_complete(mucse);
 }
 
 int rnpgbe_poll(struct napi_struct *napi, int budget)
@@ -255,7 +387,7 @@ static int rnpgbe_open(struct net_device *netdev)
 	int err;
 
 	/* disallow open during test */
-	if (test_bit(__MMUCSE_TESTING, &mucse->state))
+	if (test_bit(__MUCSE_TESTING, &mucse->state))
 		return -EBUSY;
 
 	netif_carrier_off(netdev);
@@ -271,6 +403,8 @@ static int rnpgbe_open(struct net_device *netdev)
 	if (err)
 		goto err_set_queues;
 	rnpgbe_up_complete(mucse);
+
+	return 0;
 err_req_irq:
 	rnpgbe_free_txrx(mucse);
 err_set_queues:
@@ -387,6 +521,8 @@ static irqreturn_t rnpgbe_msix_other(int irq, void *data)
 	struct mucse *mucse = (struct mucse *)data;
 
 	set_bit(__MUCSE_IN_IRQ, &mucse->state);
+	/* handle fw req and ack */
+	rnpgbe_fw_msg_handler(mucse);
 	clear_bit(__MUCSE_IN_IRQ, &mucse->state);
 
 	return IRQ_HANDLED;
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
index fbb154051313..666896de1f9f 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx.h
@@ -8,6 +8,7 @@
 #define MUCSE_ERR_MBX -100
 /* 14 words */
 #define MUCSE_VFMAILBOX_SIZE 14
+#define MUCSE_FW_MAILBOX_SIZE MUCSE_VFMAILBOX_SIZE
 /* ================ PF <--> VF mailbox ================ */
 #define SHARE_MEM_BYTES 64
 static inline u32 PF_VF_SHM(struct mucse_mbx_info *mbx, int vf)
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
index fc6c0dbfff84..066bf450cf59 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.c
@@ -501,3 +501,150 @@ int mucse_mbx_ifup_down(struct mucse_hw *hw, int up)
 
 	return err;
 }
+
+static void rnpgbe_link_stat_mark(struct mucse_hw *hw, int up)
+{
+	struct mucse *mucse = (struct mucse *)hw->back;
+	u32 v;
+
+	v = hw_rd32(hw, DMA_DUMY);
+	v &= ~(0x0f000f11);
+	v |= 0xa0000000;
+	if (up) {
+		v |= BIT(0);
+		switch (hw->speed) {
+		case 10:
+			v |= (speed_10 << 8);
+			break;
+		case 100:
+			v |= (speed_100 << 8);
+			break;
+		case 1000:
+			v |= (speed_1000 << 8);
+			break;
+		case 10000:
+			v |= (speed_10000 << 8);
+			break;
+		case 25000:
+			v |= (speed_25000 << 8);
+			break;
+		case 40000:
+			v |= (speed_40000 << 8);
+			break;
+		}
+		v |= (hw->duplex << 4);
+		v |= (hw->fc.current_mode << 24);
+	} else {
+		v &= ~BIT(0);
+	}
+	/* update lldp_status for fw 0.1.5.0 and later */
+	if (hw->fw_version >= 0x00010500) {
+		if (mucse->priv_flags & M_PRIV_FLAG_LLDP)
+			v |= BIT(6);
+		else
+			v &= (~BIT(6));
+	}
+	hw_wr32(hw, DMA_DUMY, v);
+}
+
+static int rnpgbe_mbx_fw_reply_handler(struct mucse *mucse,
+				       struct mbx_fw_cmd_reply *reply)
+{
+	struct mbx_req_cookie *cookie;
+
+	cookie = reply->cookie;
+	if (!cookie || cookie->magic != COOKIE_MAGIC)
+		return -EIO;
+
+	if (cookie->priv_len > 0)
+		memcpy(cookie->priv, reply->data, cookie->priv_len);
+
+	cookie->done = 1;
+
+	if (reply->flags & FLAGS_ERR)
+		cookie->errcode = reply->error_code;
+	else
+		cookie->errcode = 0;
+	wake_up_interruptible(&cookie->wait);
+	return 0;
+}
+
+static int rnpgbe_mbx_fw_req_handler(struct mucse *mucse,
+				     struct mbx_fw_cmd_req *req)
+{
+	struct mucse_hw *hw = &mucse->hw;
+
+	switch (req->opcode) {
+	case LINK_STATUS_EVENT:
+		if (req->link_stat.lane_status)
+			hw->link = 1;
+		else
+			hw->link = 0;
+		if (hw->hw_type == rnpgbe_hw_n500 ||
+		    hw->hw_type == rnpgbe_hw_n210 ||
+		    hw->hw_type == rnpgbe_hw_n210L) {
+			/* fw_version 0.1.5.0 or later reports lldp_status */
+			if (hw->fw_version >= 0x00010500) {
+				if (req->link_stat.st[0].lldp_status)
+					mucse->priv_flags |= M_PRIV_FLAG_LLDP;
+				else
+					mucse->priv_flags &= (~M_PRIV_FLAG_LLDP);
+			}
+		}
+		if (req->link_stat.port_st_magic == SPEED_VALID_MAGIC) {
+			hw->speed = req->link_stat.st[0].speed;
+			hw->duplex = req->link_stat.st[0].duplex;
+			if (hw->hw_type == rnpgbe_hw_n500 ||
+			    hw->hw_type == rnpgbe_hw_n210 ||
+			    hw->hw_type == rnpgbe_hw_n210L) {
+				hw->fc.current_mode =
+					req->link_stat.st[0].pause;
+				hw->tp_mdx = req->link_stat.st[0].tp_mdx;
+			}
+		}
+		if (req->link_stat.lane_status)
+			rnpgbe_link_stat_mark(hw, 1);
+		else
+			rnpgbe_link_stat_mark(hw, 0);
+
+		mucse->flags |= M_FLAG_NEED_LINK_UPDATE;
+		break;
+	}
+	return 0;
+}
+
+static int rnpgbe_rcv_msg_from_fw(struct mucse *mucse)
+{
+	u32 msgbuf[MUCSE_FW_MAILBOX_SIZE];
+	struct mucse_hw *hw = &mucse->hw;
+	s32 retval;
+
+	retval = mucse_read_mbx(hw, msgbuf, MUCSE_FW_MAILBOX_SIZE, MBX_FW);
+	if (retval)
+		return retval;
+	/* FLAGS_DD set means this is a reply to a request we sent */
+	if (((unsigned short *)msgbuf)[0] & FLAGS_DD) {
+		return rnpgbe_mbx_fw_reply_handler(mucse,
+				(struct mbx_fw_cmd_reply *)msgbuf);
+	} else {
+		return rnpgbe_mbx_fw_req_handler(mucse,
+				(struct mbx_fw_cmd_req *)msgbuf);
+	}
+}
+
+static void rnpgbe_rcv_ack_from_fw(struct mucse *mucse)
+{
+	/* nothing to do yet */
+}
+
+int rnpgbe_fw_msg_handler(struct mucse *mucse)
+{
+	/* check fw-req */
+	if (!mucse_check_for_msg(&mucse->hw, MBX_FW))
+		rnpgbe_rcv_msg_from_fw(mucse);
+	/* process any acks */
+	if (!mucse_check_for_ack(&mucse->hw, MBX_FW))
+		rnpgbe_rcv_ack_from_fw(mucse);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
index cd5a98acd983..2700eebf5873 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_mbx_fw.h
@@ -647,4 +647,5 @@ int mucse_fw_get_macaddr(struct mucse_hw *hw, int pfvfnum,
 			 u8 *mac_addr, int nr_lane);
 int mucse_mbx_link_event_enable(struct mucse_hw *hw, int enable);
 int mucse_mbx_ifup_down(struct mucse_hw *hw, int up);
+int rnpgbe_fw_msg_handler(struct mucse *mucse);
 #endif /* _RNPGBE_MBX_FW_H */
-- 
2.25.1



* [PATCH 13/15] net: rnpgbe: Add base tx functions
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (11 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 12/15] net: rnpgbe: Add link up handler Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 14/15] net: rnpgbe: Add base rx function Dong Yibo
  2025-07-03  1:48 ` [PATCH 15/15] net: rnpgbe: Add ITR for rx Dong Yibo
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add tx map and tx clean functions for basic transmit support.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  20 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_chip.c   |  56 ++-
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h |   6 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 406 +++++++++++++++++-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |  14 +-
 .../net/ethernet/mucse/rnpgbe/rnpgbe_main.c   |  79 +++-
 6 files changed, 568 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 5ca2ec73bbe7..7871cb30db58 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -50,6 +50,7 @@ struct mucse_eth_operations {
 	s32 (*set_rar)(struct mucse_eth_info *eth, u32 index, u8 *addr);
 	s32 (*clear_rar)(struct mucse_eth_info *eth, u32 index);
 	void (*clr_mc_addr)(struct mucse_eth_info *eth);
+	void (*set_rx)(struct mucse_eth_info *eth, bool status);
 };
 
 #define RNPGBE_MAX_MTA 128
@@ -79,6 +80,7 @@ struct mucse_mac_info;
 
 struct mucse_mac_operations {
 	void (*set_mac)(struct mucse_mac_info *mac, u8 *addr, int index);
+	void (*set_mac_rx)(struct mucse_mac_info *mac, bool status);
 };
 
 struct mucse_mac_info {
@@ -221,6 +223,7 @@ struct mucse_hw_operations {
 	void (*set_mbx_ifup)(struct mucse_hw *hw, int enable);
 	void (*check_link)(struct mucse_hw *hw, u32 *speed, bool *link_up,
 			   bool *duplex);
+	void (*set_mac_rx)(struct mucse_hw *hw, bool status);
 };
 
 enum {
@@ -542,6 +545,8 @@ struct mucse_ring {
 
 struct mucse_ring_container {
 	struct mucse_ring *ring;
+	unsigned int total_bytes;
+	unsigned int total_packets;
 	u16 work_limit;
 	u16 count;
 };
@@ -623,11 +628,25 @@ struct rnpgbe_info {
 	void (*get_invariants)(struct mucse_hw *hw);
 };
 
+static inline u16 mucse_desc_unused(struct mucse_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
+}
+
 static inline struct netdev_queue *txring_txq(const struct mucse_ring *ring)
 {
 	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
 }
 
+static inline __le64 build_ctob(u32 vlan_cmd, u32 mac_ip_len, u32 size)
+{
+	return cpu_to_le64(((u64)vlan_cmd << 32) | ((u64)mac_ip_len << 16) |
+			   ((u64)size));
+}
+
 #define M_RXBUFFER_1536 (1536)
 static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
 {
@@ -688,6 +707,5 @@ static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
 #define MUCSE_ERR_INVALID_ARGUMENT (-1)
 
 void rnpgbe_service_event_schedule(struct mucse *mucse);
-int rnpgbe_poll(struct napi_struct *napi, int budget);
 
 #endif /* _RNPGBE_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
index cb2448f497fe..edc3f697fa3a 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_chip.c
@@ -85,10 +85,23 @@ static void rnpgbe_eth_clr_mc_addr_n500(struct mucse_eth_info *eth)
 		eth_wr32(eth, RNPGBE_ETH_MUTICAST_HASH_TABLE(i), 0);
 }
 
+static void rnpgbe_eth_set_rx_n500(struct mucse_eth_info *eth,
+				   bool status)
+{
+	if (status) {
+		eth_wr32(eth, RNPGBE_ETH_EXCEPT_DROP_PROC, 0);
+		eth_wr32(eth, RNPGBE_ETH_TX_MUX_DROP, 0);
+	} else {
+		eth_wr32(eth, RNPGBE_ETH_EXCEPT_DROP_PROC, 1);
+		eth_wr32(eth, RNPGBE_ETH_TX_MUX_DROP, 1);
+	}
+}
+
 static struct mucse_eth_operations eth_ops_n500 = {
 	.set_rar = &rnpgbe_eth_set_rar_n500,
 	.clear_rar = &rnpgbe_eth_clear_rar_n500,
-	.clr_mc_addr = &rnpgbe_eth_clr_mc_addr_n500
+	.clr_mc_addr = &rnpgbe_eth_clr_mc_addr_n500,
+	.set_rx = &rnpgbe_eth_set_rx_n500,
 };
 
 /**
@@ -111,8 +124,31 @@ static void rnpgbe_mac_set_mac_n500(struct mucse_mac_info *mac,
 	mac_wr32(mac, RNPGBE_MAC_UNICAST_LOW(index), rar_low);
 }
 
+/**
+ * rnpgbe_mac_set_rx_n500 - Setup mac rx status
+ * @mac: pointer to mac structure
+ * @status: true for rx on/ false for rx off
+ *
+ * Setup mac rx status.
+ **/
+static void rnpgbe_mac_set_rx_n500(struct mucse_mac_info *mac,
+				   bool status)
+{
+	u32 value = mac_rd32(mac, R_MAC_CONTROL);
+
+	if (status)
+		value |= MAC_CONTROL_TE | MAC_CONTROL_RE;
+	else
+		value &= ~(MAC_CONTROL_RE);
+
+	mac_wr32(mac, R_MAC_CONTROL, value);
+	value = mac_rd32(mac, R_MAC_FRAME_FILTER);
+	mac_wr32(mac, R_MAC_FRAME_FILTER, value | 1);
+}
+
 static struct mucse_mac_operations mac_ops_n500 = {
 	.set_mac = &rnpgbe_mac_set_mac_n500,
+	.set_mac_rx = &rnpgbe_mac_set_rx_n500,
 };
 
 static int rnpgbe_init_hw_ops_n500(struct mucse_hw *hw)
@@ -318,6 +354,23 @@ static void rnpgbe_check_mac_link_hw_ops_n500(struct mucse_hw *hw,
 	*duplex = !!hw->duplex;
 }
 
+static void rnpgbe_set_mac_rx_hw_ops_n500(struct mucse_hw *hw, bool status)
+{
+	struct mucse_eth_info *eth = &hw->eth;
+	struct mucse_mac_info *mac = &hw->mac;
+
+	if (pci_channel_offline(hw->pdev))
+		return;
+
+	if (status) {
+		mac->ops.set_mac_rx(mac, status);
+		eth->ops.set_rx(eth, status);
+	} else {
+		eth->ops.set_rx(eth, status);
+		mac->ops.set_mac_rx(mac, status);
+	}
+}
+
 static struct mucse_hw_operations hw_ops_n500 = {
 	.init_hw = &rnpgbe_init_hw_ops_n500,
 	.reset_hw = &rnpgbe_reset_hw_ops_n500,
@@ -330,6 +383,7 @@ static struct mucse_hw_operations hw_ops_n500 = {
 	.set_mbx_link_event = &rnpgbe_set_mbx_link_event_hw_ops_n500,
 	.set_mbx_ifup = &rnpgbe_set_mbx_ifup_hw_ops_n500,
 	.check_link = &rnpgbe_check_mac_link_hw_ops_n500,
+	.set_mac_rx = &rnpgbe_set_mac_rx_hw_ops_n500,
 };
 
 /**
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
index 98031600801b..71a408c941e3 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_hw.h
@@ -26,6 +26,8 @@
 #define RING_VECTOR(n) (0x04 * (n))
 
 /* eth regs */
+#define RNPGBE_ETH_TX_MUX_DROP (0x98)
+#define RNPGBE_ETH_EXCEPT_DROP_PROC (0x0470)
 #define RNPGBE_ETH_BYPASS (0x8000)
 #define RNPGBE_HOST_FILTER_EN (0x800c)
 #define RNPGBE_REDIR_EN (0x8030)
@@ -43,6 +45,10 @@
 #define RNPGBE_LEGANCY_ENABLE (0xd004)
 #define RNPGBE_LEGANCY_TIME (0xd000)
 /* mac regs */
+#define R_MAC_CONTROL (0)
+#define MAC_CONTROL_TE (0x8)
+#define MAC_CONTROL_RE (0x4)
+#define R_MAC_FRAME_FILTER (0x4)
 #define M_RAH_AV 0x80000000
 #define RNPGBE_MAC_UNICAST_LOW(i) (0x44 + (i) * 0x08)
 #define RNPGBE_MAC_UNICAST_HIGH(i) (0x40 + (i) * 0x08)
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index fec084e20513..1aab4cb0bbaa 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -131,6 +131,174 @@ static void mucse_add_ring(struct mucse_ring *ring,
 	head->count++;
 }
 
+/**
+ * rnpgbe_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
+ * @napi_budget: how many packets driver is allowed to clean
+ **/
+static bool rnpgbe_clean_tx_irq(struct mucse_q_vector *q_vector,
+				struct mucse_ring *tx_ring,
+				int napi_budget)
+{
+	struct mucse *mucse = q_vector->mucse;
+	struct mucse_tx_buffer *tx_buffer;
+	struct rnpgbe_tx_desc *tx_desc;
+	u64 total_bytes = 0, total_packets = 0;
+	int budget = q_vector->tx.work_limit;
+	int i = tx_ring->next_to_clean;
+
+	if (test_bit(__MUCSE_DOWN, &mucse->state))
+		return true;
+
+	tx_ring->tx_stats.poll_count++;
+	tx_buffer = &tx_ring->tx_buffer_info[i];
+	tx_desc = M_TX_DESC(tx_ring, i);
+	i -= tx_ring->count;
+
+	do {
+		struct rnpgbe_tx_desc *eop_desc = tx_buffer->next_to_watch;
+
+		/* if next_to_watch is not set then there is no work pending */
+		if (!eop_desc)
+			break;
+
+		/* prevent any other reads prior to eop_desc */
+		rmb();
+
+		/* if eop DD is not set, pending work has not been completed */
+		if (!(eop_desc->vlan_cmd & cpu_to_le32(M_TXD_STAT_DD)))
+			break;
+		/* clear next_to_watch to prevent false hangs */
+		tx_buffer->next_to_watch = NULL;
+
+		/* update the statistics for this packet */
+		total_bytes += tx_buffer->bytecount;
+		total_packets += tx_buffer->gso_segs;
+
+		/* free the skb */
+		napi_consume_skb(tx_buffer->skb, napi_budget);
+
+		/* unmap skb header data */
+		dma_unmap_single(tx_ring->dev, dma_unmap_addr(tx_buffer, dma),
+				 dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+
+		/* clear tx_buffer data */
+		tx_buffer->skb = NULL;
+		dma_unmap_len_set(tx_buffer, len, 0);
+
+		/* unmap remaining buffers */
+		while (tx_desc != eop_desc) {
+			tx_buffer++;
+			tx_desc++;
+			i++;
+			if (unlikely(!i)) {
+				i -= tx_ring->count;
+				tx_buffer = tx_ring->tx_buffer_info;
+				tx_desc = M_TX_DESC(tx_ring, 0);
+			}
+
+			/* unmap any remaining paged data */
+			if (dma_unmap_len(tx_buffer, len)) {
+				dma_unmap_page(tx_ring->dev,
+					       dma_unmap_addr(tx_buffer, dma),
+					       dma_unmap_len(tx_buffer, len),
+					       DMA_TO_DEVICE);
+				dma_unmap_len_set(tx_buffer, len, 0);
+			}
+			budget--;
+		}
+
+		/* move us one more past the eop_desc for start of next pkt */
+		tx_buffer++;
+		tx_desc++;
+		i++;
+		if (unlikely(!i)) {
+			i -= tx_ring->count;
+			tx_buffer = tx_ring->tx_buffer_info;
+			tx_desc = M_TX_DESC(tx_ring, 0);
+		}
+
+		/* issue prefetch for next Tx descriptor */
+		prefetch(tx_desc);
+
+		/* update budget accounting */
+		budget--;
+	} while (likely(budget > 0));
+	netdev_tx_completed_queue(txring_txq(tx_ring), total_packets,
+				  total_bytes);
+	i += tx_ring->count;
+	tx_ring->next_to_clean = i;
+	u64_stats_update_begin(&tx_ring->syncp);
+	tx_ring->stats.bytes += total_bytes;
+	tx_ring->stats.packets += total_packets;
+	tx_ring->tx_stats.tx_clean_count += total_packets;
+	tx_ring->tx_stats.tx_clean_times++;
+	if (tx_ring->tx_stats.tx_clean_times > 10) {
+		tx_ring->tx_stats.tx_clean_times = 0;
+		tx_ring->tx_stats.tx_clean_count = 0;
+	}
+
+	u64_stats_update_end(&tx_ring->syncp);
+	q_vector->tx.total_bytes += total_bytes;
+	q_vector->tx.total_packets += total_packets;
+	tx_ring->tx_stats.send_done_bytes += total_bytes;
+
+#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
+	if (likely(netif_carrier_ok(tx_ring->netdev) &&
+		   (mucse_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
+		/* Make sure that anybody stopping the queue after this
+		 * sees the new next_to_clean.
+		 */
+		smp_mb();
+		if (__netif_subqueue_stopped(tx_ring->netdev,
+					     tx_ring->queue_index) &&
+		    !test_bit(__MUCSE_DOWN, &mucse->state)) {
+			netif_wake_subqueue(tx_ring->netdev,
+					    tx_ring->queue_index);
+			++tx_ring->tx_stats.restart_queue;
+		}
+	}
+
+	return total_bytes == 0;
+}
+
+/**
+ * rnpgbe_poll - NAPI Rx polling callback
+ * @napi: structure for representing this polling device
+ * @budget: how many packets driver is allowed to clean
+ *
+ * This function is used for legacy and MSI interrupts in NAPI mode
+ **/
+static int rnpgbe_poll(struct napi_struct *napi, int budget)
+{
+	struct mucse_q_vector *q_vector =
+		container_of(napi, struct mucse_q_vector, napi);
+	struct mucse *mucse = q_vector->mucse;
+	struct mucse_ring *ring;
+	int work_done = 0;
+	bool clean_complete = true;
+
+	mucse_for_each_ring(ring, q_vector->tx)
+		clean_complete = rnpgbe_clean_tx_irq(q_vector, ring, budget);
+
+	if (!netif_running(mucse->netdev))
+		clean_complete = true;
+	/* force done */
+	if (test_bit(__MUCSE_DOWN, &mucse->state))
+		clean_complete = true;
+
+	if (!clean_complete)
+		return budget;
+	/* all work done, exit the polling mode */
+	if (likely(napi_complete_done(napi, work_done))) {
+		if (!test_bit(__MUCSE_DOWN, &mucse->state))
+			rnpgbe_irq_enable_queues(mucse, q_vector);
+	}
+
+	return min(work_done, budget - 1);
+}
+
 /**
  * rnpgbe_alloc_q_vector - Allocate memory for a single interrupt vector
  * @mucse: pointer to private structure
@@ -863,8 +1031,14 @@ static void rnpgbe_configure_tx_ring(struct mucse *mucse,
  **/
 void rnpgbe_configure_tx(struct mucse *mucse)
 {
-	u32 i;
+	u32 i, dma_axi_ctl;
+	struct mucse_hw *hw = &mucse->hw;
+	struct mucse_dma_info *dma = &hw->dma;
 
+	/* dma_axi_en.tx_en must be set before Tx queues are enabled */
+	dma_axi_ctl = dma_rd32(dma, DMA_AXI_EN);
+	dma_axi_ctl |= TX_AXI_RW_EN;
+	dma_wr32(dma, DMA_AXI_EN, dma_axi_ctl);
 	/* Setup the HW Tx Head and Tail descriptor pointers */
 	for (i = 0; i < (mucse->num_tx_queues); i++)
 		rnpgbe_configure_tx_ring(mucse, mucse->tx_ring[i]);
@@ -955,21 +1129,47 @@ static void rnpgbe_configure_rx_ring(struct mucse *mucse,
 }
 
 /**
- * rnpgbe_configure_rx - Configure 8259x Receive Unit after Reset
+ * rnpgbe_configure_rx - Configure Receive Unit after Reset
  * @mucse: pointer to private structure
  *
  * Configure the Rx unit of the MAC after a reset.
  **/
 void rnpgbe_configure_rx(struct mucse *mucse)
 {
-	int i;
+	u32 i, dma_axi_ctl;
+	struct mucse_hw *hw = &mucse->hw;
+	struct mucse_dma_info *dma = &hw->dma;
 
 	for (i = 0; i < mucse->num_rx_queues; i++)
 		rnpgbe_configure_rx_ring(mucse, mucse->rx_ring[i]);
+
+	/* dma_axi_en.rx_en must be set after Rx queues are configured */
+	dma_axi_ctl = dma_rd32(dma, DMA_AXI_EN);
+	dma_axi_ctl |= RX_AXI_RW_EN;
+	dma_wr32(dma, DMA_AXI_EN, dma_axi_ctl);
+}
+
+/**
+ * rnpgbe_clean_all_tx_rings - Free Tx Buffers for all queues
+ * @adapter: board private structure
+ **/
+void rnpgbe_clean_all_tx_rings(struct mucse *mucse)
+{
+	int i;
+
+	for (i = 0; i < mucse->num_tx_queues; i++)
+		rnpgbe_clean_tx_ring(mucse->tx_ring[i]);
 }
 
 static irqreturn_t rnpgbe_msix_clean_rings(int irq, void *data)
 {
+	struct mucse_q_vector *q_vector = (struct mucse_q_vector *)data;
+
+	rnpgbe_irq_disable_queues(q_vector);
+
+	if (q_vector->rx.ring || q_vector->tx.ring)
+		napi_schedule_irqoff(&q_vector->napi);
+
 	return IRQ_HANDLED;
 }
 
@@ -1238,3 +1438,203 @@ void rnpgbe_configure_msix(struct mucse *mucse)
 		}
 	}
 }
+
+static void rnpgbe_unmap_and_free_tx_resource(struct mucse_ring *ring,
+					      struct mucse_tx_buffer *tx_buffer)
+{
+	if (tx_buffer->skb) {
+		dev_kfree_skb_any(tx_buffer->skb);
+		if (dma_unmap_len(tx_buffer, len))
+			dma_unmap_single(ring->dev,
+					 dma_unmap_addr(tx_buffer, dma),
+					 dma_unmap_len(tx_buffer, len),
+					 DMA_TO_DEVICE);
+	} else if (dma_unmap_len(tx_buffer, len)) {
+		dma_unmap_page(ring->dev, dma_unmap_addr(tx_buffer, dma),
+			       dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+	}
+	tx_buffer->next_to_watch = NULL;
+	tx_buffer->skb = NULL;
+	dma_unmap_len_set(tx_buffer, len, 0);
+}
+
+static int rnpgbe_tx_map(struct mucse_ring *tx_ring,
+			 struct mucse_tx_buffer *first, u32 mac_ip_len,
+			 u32 tx_flags)
+{
+	struct sk_buff *skb = first->skb;
+	struct mucse_tx_buffer *tx_buffer;
+	struct rnpgbe_tx_desc *tx_desc;
+	skb_frag_t *frag;
+	dma_addr_t dma;
+	unsigned int data_len, size;
+	u16 i = tx_ring->next_to_use;
+	u64 fun_id = ((u64)(tx_ring->pfvfnum) << (56));
+
+	tx_desc = M_TX_DESC(tx_ring, i);
+	size = skb_headlen(skb);
+	data_len = skb->data_len;
+	dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);
+	tx_buffer = first;
+
+	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+		if (dma_mapping_error(tx_ring->dev, dma))
+			goto dma_error;
+
+		/* record length, and DMA address */
+		dma_unmap_len_set(tx_buffer, len, size);
+		dma_unmap_addr_set(tx_buffer, dma, dma);
+
+		/* 1st desc */
+		tx_desc->pkt_addr = cpu_to_le64(dma | fun_id);
+
+		while (unlikely(size > M_MAX_DATA_PER_TXD)) {
+			tx_desc->vlan_cmd_bsz = build_ctob(tx_flags,
+							   mac_ip_len,
+							   M_MAX_DATA_PER_TXD);
+			i++;
+			tx_desc++;
+			if (i == tx_ring->count) {
+				tx_desc = M_TX_DESC(tx_ring, 0);
+				i = 0;
+			}
+			dma += M_MAX_DATA_PER_TXD;
+			size -= M_MAX_DATA_PER_TXD;
+			tx_desc->pkt_addr = cpu_to_le64(dma | fun_id);
+		}
+
+		if (likely(!data_len))
+			break;
+		tx_desc->vlan_cmd_bsz = build_ctob(tx_flags, mac_ip_len, size);
+		/* advance to the next fragment */
+		i++;
+		tx_desc++;
+		if (i == tx_ring->count) {
+			tx_desc = M_TX_DESC(tx_ring, 0);
+			i = 0;
+		}
+
+		size = skb_frag_size(frag);
+		data_len -= size;
+		dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+				       DMA_TO_DEVICE);
+		tx_buffer = &tx_ring->tx_buffer_info[i];
+	}
+
+	/* write last descriptor with RS and EOP bits */
+	tx_desc->vlan_cmd_bsz = build_ctob(tx_flags | M_TXD_CMD_EOP | M_TXD_CMD_RS,
+					   mac_ip_len, size);
+	/* set the timestamp */
+	first->time_stamp = jiffies;
+	tx_ring->tx_stats.send_bytes += first->bytecount;
+
+	/*
+	 * Force memory writes to complete before letting h/w know there
+	 * are new descriptors to fetch.  (Only applicable for weak-ordered
+	 * memory model archs, such as IA-64).
+	 *
+	 * We also need this memory barrier to make certain all of the
+	 * status bits have been updated before next_to_watch is written.
+	 */
+	/* timestamp the skb as late as possible, just prior to notifying
+	 * the MAC that it should transmit this packet
+	 */
+	wmb();
+	/* set next_to_watch value indicating a packet is present */
+	first->next_to_watch = tx_desc;
+	i++;
+	if (i == tx_ring->count)
+		i = 0;
+	tx_ring->next_to_use = i;
+	skb_tx_timestamp(skb);
+	netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);
+	/* notify HW of packet */
+	m_wr_reg(tx_ring->tail, i);
+	return 0;
+dma_error:
+	/* clear dma mappings for failed tx_buffer_info map */
+	for (;;) {
+		tx_buffer = &tx_ring->tx_buffer_info[i];
+		rnpgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+		if (tx_buffer == first)
+			break;
+		if (i == 0)
+			i += tx_ring->count;
+		i--;
+	}
+	dev_kfree_skb_any(first->skb);
+	first->skb = NULL;
+	tx_ring->next_to_use = i;
+
+	return -1;
+}
+
+static int __rnpgbe_maybe_stop_tx(struct mucse_ring *tx_ring, u16 size)
+{
+	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+	/* Herbert's original patch had:
+	 *  smp_mb__after_netif_stop_queue();
+	 * but since that doesn't exist yet, just open code it.
+	 */
+	smp_mb();
+
+	/* We need to check again in a case another CPU has just
+	 * made room available.
+	 */
+	if (likely(mucse_desc_unused(tx_ring) < size))
+		return -EBUSY;
+
+	/* A reprieve! - use start_queue because it doesn't call schedule */
+	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+	++tx_ring->tx_stats.restart_queue;
+
+	return 0;
+}
+
+static inline int rnpgbe_maybe_stop_tx(struct mucse_ring *tx_ring, u16 size)
+{
+	if (likely(mucse_desc_unused(tx_ring) >= size))
+		return 0;
+	return __rnpgbe_maybe_stop_tx(tx_ring, size);
+}
+
+netdev_tx_t rnpgbe_xmit_frame_ring(struct sk_buff *skb,
+				   struct mucse *mucse,
+				   struct mucse_ring *tx_ring)
+{
+	struct mucse_tx_buffer *first;
+	u16 count = TXD_USE_COUNT(skb_headlen(skb));
+	/* keep it non-zero */
+	u32 mac_ip_len = 20;
+	u32 tx_flags = 0;
+	unsigned short f;
+
+	for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) {
+		skb_frag_t *frag_temp = &skb_shinfo(skb)->frags[f];
+
+		count += TXD_USE_COUNT(skb_frag_size(frag_temp));
+	}
+
+	if (rnpgbe_maybe_stop_tx(tx_ring, count + 3)) {
+		tx_ring->tx_stats.tx_busy++;
+		return NETDEV_TX_BUSY;
+	}
+
+	/* record the location of the first descriptor for this packet */
+	first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+	first->skb = skb;
+	/* maybe consider len smaller than 60 */
+	first->bytecount = (skb->len > 60) ? skb->len : 60;
+	first->gso_segs = 1;
+	first->priv_tags = 0;
+	first->mss_len_vf_num = 0;
+	first->inner_vlan_tunnel_len = 0;
+	first->ctx_flag = false;
+
+	if (rnpgbe_tx_map(tx_ring, first, mac_ip_len, tx_flags))
+		goto skip_check;
+	rnpgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+skip_check:
+	return NETDEV_TX_OK;
+}
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
index 65bd97c26eaf..7179e5ebfbf0 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -8,6 +8,9 @@
 
 #define RING_OFFSET(n) (0x100 * (n))
 #define DMA_DUMY (0xc)
+#define DMA_AXI_EN (0x10)
+#define RX_AXI_RW_EN (0x03 << 0)
+#define TX_AXI_RW_EN (0x03 << 2)
 #define DMA_RX_START (0x10)
 #define DMA_RX_READY (0x14)
 #define DMA_TX_START (0x18)
@@ -52,6 +55,12 @@
 #define e_info(msglvl, format, arg...)  \
 	netif_info(mucse, msglvl, mucse->netdev, format, ##arg)
 
+/* now tx max 4k for one desc */
+#define M_MAX_TXD_PWR 12
+#define M_MAX_DATA_PER_TXD (0x1 << M_MAX_TXD_PWR)
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), M_MAX_DATA_PER_TXD)
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+
 enum link_event_mask {
 	EVT_LINK_UP = 1,
 	EVT_NO_MEDIA = 2,
@@ -119,6 +128,7 @@ void rnpgbe_clear_interrupt_scheme(struct mucse *mucse);
 int rnpgbe_setup_txrx(struct mucse *mucse);
 void rnpgbe_free_txrx(struct mucse *mucse);
 void rnpgbe_configure_tx(struct mucse *mucse);
+void rnpgbe_clean_all_tx_rings(struct mucse *mucse);
 void rnpgbe_disable_rx_queue(struct mucse_ring *ring);
 void rnpgbe_configure_rx(struct mucse *mucse);
 int rnpgbe_request_irq(struct mucse *mucse);
@@ -126,5 +136,7 @@ void rnpgbe_free_irq(struct mucse *mucse);
 void rnpgbe_napi_enable_all(struct mucse *mucse);
 void rnpgbe_napi_disable_all(struct mucse *mucse);
 void rnpgbe_configure_msix(struct mucse *mucse);
-
+netdev_tx_t rnpgbe_xmit_frame_ring(struct sk_buff *skb,
+				   struct mucse *mucse,
+				   struct mucse_ring *tx_ring);
 #endif /* _RNPGBE_LIB_H */
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
index c2f53af3de09..ea41a758ac49 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_main.c
@@ -166,12 +166,14 @@ static void rnpgbe_watchdog_update_link(struct mucse *mucse)
 static void rnpgbe_watchdog_link_is_up(struct mucse *mucse)
 {
 	struct net_device *netdev = mucse->netdev;
+	struct mucse_hw *hw = &mucse->hw;
 
 	/* only continue if link was previously down */
 	if (netif_carrier_ok(netdev))
 		return;
 	netif_carrier_on(netdev);
 	netif_tx_wake_all_queues(netdev);
+	hw->ops.set_mac_rx(hw, true);
 }
 
 /**
@@ -182,6 +184,7 @@ static void rnpgbe_watchdog_link_is_up(struct mucse *mucse)
 static void rnpgbe_watchdog_link_is_down(struct mucse *mucse)
 {
 	struct net_device *netdev = mucse->netdev;
+	struct mucse_hw *hw = &mucse->hw;
 
 	mucse->link_up = false;
 	mucse->link_speed = 0;
@@ -191,6 +194,7 @@ static void rnpgbe_watchdog_link_is_down(struct mucse *mucse)
 	e_info(drv, "NIC Link is Down\n");
 	netif_carrier_off(netdev);
 	netif_tx_stop_all_queues(netdev);
+	hw->ops.set_mac_rx(hw, false);
 }
 
 /**
@@ -224,11 +228,6 @@ static void rnpgbe_service_task(struct work_struct *work)
 	rnpgbe_service_event_complete(mucse);
 }
 
-int rnpgbe_poll(struct napi_struct *napi, int budget)
-{
-	return 0;
-}
-
 /**
  * rnpgbe_check_fw_from_flash - Check chip-id and bin-id
  * @hw: hardware structure
@@ -418,6 +417,7 @@ static void rnpgbe_down(struct mucse *mucse)
 	struct net_device *netdev = mucse->netdev;
 
 	set_bit(__MUCSE_DOWN, &mucse->state);
+	hw->ops.set_mac_rx(hw, false);
 	hw->ops.set_mbx_link_event(hw, 0);
 	hw->ops.set_mbx_ifup(hw, 0);
 	if (netif_carrier_ok(netdev))
@@ -425,6 +425,7 @@ static void rnpgbe_down(struct mucse *mucse)
 	netif_tx_stop_all_queues(netdev);
 	netif_carrier_off(netdev);
 	rnpgbe_irq_disable(mucse);
+
 	netif_tx_disable(netdev);
 	rnpgbe_napi_disable_all(mucse);
 	mucse->flags &= ~M_FLAG_NEED_LINK_UPDATE;
@@ -453,14 +454,78 @@ static int rnpgbe_close(struct net_device *netdev)
 static netdev_tx_t rnpgbe_xmit_frame(struct sk_buff *skb,
 				     struct net_device *netdev)
 {
-	dev_kfree_skb_any(skb);
-	return NETDEV_TX_OK;
+	struct mucse *mucse = netdev_priv(netdev);
+	struct mucse_ring *tx_ring;
+
+	if (!netif_carrier_ok(netdev)) {
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
+	}
+	if (skb->len < 33) {
+		if (skb_padto(skb, 33))
+			return NETDEV_TX_OK;
+		skb->len = 33;
+	}
+	if (skb->len > 65535) {
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
+	}
+	tx_ring = mucse->tx_ring[skb->queue_mapping];
+	return rnpgbe_xmit_frame_ring(skb, mucse, tx_ring);
+}
+
+static void rnpgbe_get_stats64(struct net_device *netdev,
+			       struct rtnl_link_stats64 *stats)
+{
+	struct mucse *mucse = netdev_priv(netdev);
+	int i;
+
+	rcu_read_lock();
+	for (i = 0; i < mucse->num_rx_queues; i++) {
+		struct mucse_ring *ring = READ_ONCE(mucse->rx_ring[i]);
+		u64 bytes, packets;
+		unsigned int start;
+
+		if (ring) {
+			do {
+				start = u64_stats_fetch_begin(&ring->syncp);
+				packets = ring->stats.packets;
+				bytes = ring->stats.bytes;
+			} while (u64_stats_fetch_retry(&ring->syncp, start));
+			stats->rx_packets += packets;
+			stats->rx_bytes += bytes;
+		}
+	}
+
+	for (i = 0; i < mucse->num_tx_queues; i++) {
+		struct mucse_ring *ring = READ_ONCE(mucse->tx_ring[i]);
+		u64 bytes, packets;
+		unsigned int start;
+
+		if (ring) {
+			do {
+				start = u64_stats_fetch_begin(&ring->syncp);
+				packets = ring->stats.packets;
+				bytes = ring->stats.bytes;
+			} while (u64_stats_fetch_retry(&ring->syncp, start));
+			stats->tx_packets += packets;
+			stats->tx_bytes += bytes;
+		}
+	}
+	rcu_read_unlock();
+	/* the following stats are updated by rnpgbe_watchdog_task() */
+	stats->multicast = netdev->stats.multicast;
+	stats->rx_errors = netdev->stats.rx_errors;
+	stats->rx_length_errors = netdev->stats.rx_length_errors;
+	stats->rx_crc_errors = netdev->stats.rx_crc_errors;
+	stats->rx_missed_errors = netdev->stats.rx_missed_errors;
 }
 
 const struct net_device_ops rnpgbe_netdev_ops = {
 	.ndo_open = rnpgbe_open,
 	.ndo_stop = rnpgbe_close,
 	.ndo_start_xmit = rnpgbe_xmit_frame,
+	.ndo_get_stats64 = rnpgbe_get_stats64,
 };
 
 static void rnpgbe_assign_netdev_ops(struct net_device *dev)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 14/15] net: rnpgbe: Add base rx function
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (12 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 13/15] net: rnpgbe: Add base tx functions Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  2025-07-03  1:48 ` [PATCH 15/15] net: rnpgbe: Add ITR for rx Dong Yibo
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Add the base rx clean function used by NAPI polling.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  22 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 586 ++++++++++++++++--
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.h    |   3 +-
 3 files changed, 575 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 7871cb30db58..0b6ba4c3a6cb 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -361,6 +361,7 @@ struct mucse_rx_queue_stats {
 	u64 rx_equal_count;
 	u64 rx_clean_times;
 	u64 rx_clean_count;
+	u64 rx_resync;
 };
 
 union rnpgbe_rx_desc {
@@ -496,6 +497,7 @@ struct mucse_ring {
 	struct mucse_q_vector *q_vector;
 	struct net_device *netdev;
 	struct device *dev;
+	struct page_pool *page_pool;
 	void *desc;
 	union {
 		struct mucse_tx_buffer *tx_buffer_info;
@@ -587,6 +589,7 @@ struct mucse {
 #define M_FLAG_NEED_LINK_UPDATE ((u32)(1 << 0))
 #define M_FLAG_MSIX_ENABLED ((u32)(1 << 1))
 #define M_FLAG_MSI_ENABLED ((u32)(1 << 2))
+#define M_FLAG_SRIOV_ENABLED ((u32)(1 << 23))
 	u32 flags2;
 #define M_FLAG2_NO_NET_REG ((u32)(1 << 0))
 #define M_FLAG2_INSMOD ((u32)(1 << 1))
@@ -636,6 +639,14 @@ static inline u16 mucse_desc_unused(struct mucse_ring *ring)
 	return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
 }
 
+static inline u16 mucse_desc_unused_rx(struct mucse_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 16;
+}
+
 static inline struct netdev_queue *txring_txq(const struct mucse_ring *ring)
 {
 	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
@@ -647,12 +658,22 @@ static inline __le64 build_ctob(u32 vlan_cmd, u32 mac_ip_len, u32 size)
 			   ((u64)size));
 }
 
+#define M_RXBUFFER_256 (256)
 #define M_RXBUFFER_1536 (1536)
 static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
 {
 	return (M_RXBUFFER_1536 - NET_IP_ALIGN);
 }
 
+#define M_RX_HDR_SIZE M_RXBUFFER_256
+
+/* rnpgbe_test_staterr - tests bits in Rx descriptor status and error fields */
+static inline __le16 rnpgbe_test_staterr(union rnpgbe_rx_desc *rx_desc,
+					 const u16 stat_err_bits)
+{
+	return rx_desc->wb.cmd & cpu_to_le16(stat_err_bits);
+}
+
 #define M_TX_DESC(R, i) (&(((struct rnpgbe_tx_desc *)((R)->desc))[i]))
 #define M_RX_DESC(R, i) (&(((union rnpgbe_rx_desc *)((R)->desc))[i]))
 
@@ -684,6 +705,7 @@ static inline unsigned int mucse_rx_bufsz(struct mucse_ring *ring)
 
 #define M_TRY_LINK_TIMEOUT (4 * HZ)
 
+#define M_RX_BUFFER_WRITE (16)
 #define m_rd_reg(reg) readl((void *)(reg))
 #define m_wr_reg(reg, val) writel((val), (void *)(reg))
 #define hw_wr32(hw, reg, val) m_wr_reg((hw)->hw_addr + (reg), (val))
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index 1aab4cb0bbaa..05073663ad0e 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -2,10 +2,15 @@
 /* Copyright(c) 2020 - 2025 Mucse Corporation. */
 
 #include <linux/vmalloc.h>
+#include <net/page_pool/helpers.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
 
 #include "rnpgbe.h"
 #include "rnpgbe_lib.h"
 
+static bool rnpgbe_alloc_rx_buffers(struct mucse_ring *rx_ring,
+				    u16 cleaned_count);
 /**
  * rnpgbe_set_rss_queues - Allocate queues for RSS
  * @mucse: pointer to private structure
@@ -263,6 +268,419 @@ static bool rnpgbe_clean_tx_irq(struct mucse_q_vector *q_vector,
 	return total_bytes == 0;
 }
 
+#if (PAGE_SIZE < 8192)
+static inline int rnpgbe_compute_pad(int rx_buf_len)
+{
+	int page_size, pad_size;
+
+	page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2);
+	pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len;
+
+	return pad_size;
+}
+
+static inline int rnpgbe_skb_pad(void)
+{
+	int rx_buf_len = M_RXBUFFER_1536;
+
+	return rnpgbe_compute_pad(rx_buf_len);
+}
+
+#define RNP_SKB_PAD rnpgbe_skb_pad()
+
+static inline int rnpgbe_sg_size(void)
+{
+	int sg_size = SKB_WITH_OVERHEAD(PAGE_SIZE / 2) - RNP_SKB_PAD;
+
+	sg_size -= NET_IP_ALIGN;
+	sg_size = ALIGN_DOWN(sg_size, 4);
+
+	return sg_size;
+}
+
+#define SG_SIZE  rnpgbe_sg_size()
+
+static inline unsigned int rnpgbe_rx_offset(void)
+{
+	return RNP_SKB_PAD;
+}
+
+#else /* PAGE_SIZE < 8192 */
+#define RNP_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
+#endif
+
+static struct mucse_rx_buffer *rnpgbe_get_rx_buffer(struct mucse_ring *rx_ring,
+						    union rnpgbe_rx_desc *rx_desc,
+						    struct sk_buff **skb,
+						    const unsigned int size)
+{
+	struct mucse_rx_buffer *rx_buffer;
+	int time = 0;
+	u16 *data;
+
+	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+	data = page_address(rx_buffer->page) + rx_buffer->page_offset;
+	*skb = rx_buffer->skb;
+
+	prefetchw(page_address(rx_buffer->page) + rx_buffer->page_offset);
+
+	/* we are reusing so sync this buffer for CPU use */
+try_sync:
+	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
+				      rx_buffer->page_offset, size,
+				      DMA_FROM_DEVICE);
+
+	if ((*data == CHECK_DATA) && time < 4) {
+		time++;
+		udelay(5);
+		rx_ring->rx_stats.rx_resync++;
+		goto try_sync;
+	}
+
+	return rx_buffer;
+}
+
+static void rnpgbe_add_rx_frag(struct mucse_ring *rx_ring,
+			       struct mucse_rx_buffer *rx_buffer,
+			       struct sk_buff *skb,
+			       unsigned int size)
+{
+#if (PAGE_SIZE < 8192)
+	unsigned int truesize = PAGE_SIZE / 2;
+#else
+	unsigned int truesize = SKB_DATA_ALIGN(RNP_SKB_PAD + size);
+#endif
+
+	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
+			rx_buffer->page_offset, size, truesize);
+}
+
+static struct sk_buff *rnpgbe_build_skb(struct mucse_ring *rx_ring,
+					struct mucse_rx_buffer *rx_buffer,
+					union rnpgbe_rx_desc *rx_desc,
+					unsigned int size)
+{
+	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+	unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
+				SKB_DATA_ALIGN(size + RNP_SKB_PAD);
+	struct sk_buff *skb;
+
+	net_prefetch(va);
+	/* build an skb around the page buffer */
+	skb = build_skb(va - RNP_SKB_PAD, truesize);
+	if (unlikely(!skb))
+		return NULL;
+
+	/* update pointers within the skb to store the data */
+	skb_reserve(skb, RNP_SKB_PAD);
+	__skb_put(skb, size);
+
+	skb_mark_for_recycle(skb);
+
+	return skb;
+}
+
+static void rnpgbe_put_rx_buffer(struct mucse_ring *rx_ring,
+				 struct mucse_rx_buffer *rx_buffer,
+				 struct sk_buff *skb)
+{
+	/* clear contents of rx_buffer */
+	rx_buffer->page = NULL;
+	rx_buffer->skb = NULL;
+}
+
+/**
+ * rnpgbe_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean.  If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool rnpgbe_is_non_eop(struct mucse_ring *rx_ring,
+			      union rnpgbe_rx_desc *rx_desc,
+			      struct sk_buff *skb)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	/* fetch, update, and store next to clean */
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+
+	prefetch(M_RX_DESC(rx_ring, ntc));
+
+	/* if we are the last buffer then there is nothing else to do */
+	if (likely(rnpgbe_test_staterr(rx_desc, M_RXD_STAT_EOP)))
+		return false;
+	/* place skb in next buffer to be received */
+	rx_ring->rx_buffer_info[ntc].skb = skb;
+	rx_ring->rx_stats.non_eop_descs++;
+	/* we should clean it since we used all info in it */
+	rx_desc->wb.cmd = 0;
+
+	return true;
+}
+
+static void rnpgbe_pull_tail(struct sk_buff *skb)
+{
+	skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+	unsigned char *va;
+	unsigned int pull_len;
+
+	/*
+	 * it is valid to use page_address instead of kmap since we are
+	 * working with pages allocated out of the lowmem pool by the
+	 * page pool
+	 */
+	va = skb_frag_address(frag);
+
+	/*
+	 * we need the header to contain the greater of either ETH_HLEN or
+	 * 60 bytes if the skb->len is less than 60 for skb_pad.
+	 */
+	pull_len = eth_get_headlen(skb->dev, va, M_RX_HDR_SIZE);
+
+	/* align pull length to size of long to optimize memcpy performance */
+	skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+	/* update all of the pointers */
+	skb_frag_size_sub(frag, pull_len);
+	skb_frag_off_add(frag, pull_len);
+	skb->data_len -= pull_len;
+	skb->tail += pull_len;
+}
+
+static bool rnpgbe_cleanup_headers(struct mucse_ring __maybe_unused *rx_ring,
+				   union rnpgbe_rx_desc *rx_desc,
+				   struct sk_buff *skb)
+{
+	if (IS_ERR(skb))
+		return true;
+	/* place header in linear portion of buffer */
+	if (!skb_headlen(skb))
+		rnpgbe_pull_tail(skb);
+	/* if eth_skb_pad returns an error the skb was freed */
+	/* will padding skb->len to 60 */
+	if (eth_skb_pad(skb))
+		return true;
+
+	return false;
+}
+
+static inline void rnpgbe_rx_hash(struct mucse_ring *ring,
+				  union rnpgbe_rx_desc *rx_desc,
+				  struct sk_buff *skb)
+{
+	int rss_type;
+
+	if (!(ring->netdev->features & NETIF_F_RXHASH))
+		return;
+#define M_RSS_TYPE_MASK 0xc0
+	rss_type = le16_to_cpu(rx_desc->wb.cmd) & M_RSS_TYPE_MASK;
+	skb_set_hash(skb, le32_to_cpu(rx_desc->wb.rss_hash),
+		     rss_type ? PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
+}
+
+/**
+ * rnpgbe_rx_checksum - indicate in skb if hw indicated a good cksum
+ * @ring: structure containing ring specific data
+ * @rx_desc: current Rx descriptor being processed
+ * @skb: skb currently being received and modified
+ **/
+static inline void rnpgbe_rx_checksum(struct mucse_ring *ring,
+				      union rnpgbe_rx_desc *rx_desc,
+				      struct sk_buff *skb)
+{
+	skb_checksum_none_assert(skb);
+	/* Rx csum disabled */
+	if (!(ring->netdev->features & NETIF_F_RXCSUM))
+		return;
+
+	/* if outer L3/L4 error */
+	/* must be in promisc mode or rx-all mode */
+	if (rnpgbe_test_staterr(rx_desc, M_RXD_STAT_ERR_MASK))
+		return;
+	ring->rx_stats.csum_good++;
+	/* at least it is an IP packet which has an IP checksum */
+
+	/* It must be a TCP or UDP packet with a valid checksum */
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+}
+
+static inline int ignore_veb_vlan(struct mucse *mucse,
+				  union rnpgbe_rx_desc *rx_desc)
+{
+	if (unlikely((mucse->flags & M_FLAG_SRIOV_ENABLED) &&
+		     (rx_desc->wb.rev1 & cpu_to_le16(VEB_VF_IGNORE_VLAN)))) {
+		return 1;
+	}
+	return 0;
+}
+
+static inline __le16 rnpgbe_test_ext_cmd(union rnpgbe_rx_desc *rx_desc,
+					 const u16 stat_err_bits)
+{
+	return rx_desc->wb.rev1 & cpu_to_le16(stat_err_bits);
+}
+
+static void rnpgbe_process_skb_fields(struct mucse_ring *rx_ring,
+				      union rnpgbe_rx_desc *rx_desc,
+				      struct sk_buff *skb)
+{
+	struct net_device *dev = rx_ring->netdev;
+	struct mucse *mucse = netdev_priv(dev);
+
+	rnpgbe_rx_hash(rx_ring, rx_desc, skb);
+	rnpgbe_rx_checksum(rx_ring, rx_desc, skb);
+
+	if (((dev->features & NETIF_F_HW_VLAN_CTAG_RX) ||
+	     (dev->features & NETIF_F_HW_VLAN_STAG_RX)) &&
+	    rnpgbe_test_staterr(rx_desc, M_RXD_STAT_VLAN_VALID) &&
+	    !ignore_veb_vlan(mucse, rx_desc)) {
+		if (rnpgbe_test_ext_cmd(rx_desc, REV_OUTER_VLAN)) {
+			u16 vid_inner = le16_to_cpu(rx_desc->wb.vlan);
+			u16 vid_outer;
+			__be16 vlan_tci;
+
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+					       vid_inner);
+			/* check outer vlan type */
+			if (rnpgbe_test_staterr(rx_desc, M_RXD_STAT_STAG))
+				vlan_tci = htons(ETH_P_8021AD);
+			else
+				vlan_tci = htons(ETH_P_8021Q);
+			vid_outer = le16_to_cpu(rx_desc->wb.mark);
+			/* push outer */
+			skb = __vlan_hwaccel_push_inside(skb);
+			__vlan_hwaccel_put_tag(skb, vlan_tci, vid_outer);
+		} else {
+			/* only inner vlan */
+			u16 vid = le16_to_cpu(rx_desc->wb.vlan);
+			/* check vlan type */
+			if (rnpgbe_test_staterr(rx_desc, M_RXD_STAT_STAG)) {
+				__vlan_hwaccel_put_tag(skb,
+						       htons(ETH_P_8021AD),
+						       vid);
+			} else {
+				__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+						       vid);
+			}
+		}
+		rx_ring->rx_stats.vlan_remove++;
+	}
+	skb_record_rx_queue(skb, rx_ring->queue_index);
+	skb->protocol = eth_type_trans(skb, dev);
+}
+
+/**
+ * rnpgbe_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @q_vector: structure containing interrupt and ring information
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing.  The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed.
+ **/
+static int rnpgbe_clean_rx_irq(struct mucse_q_vector *q_vector,
+			       struct mucse_ring *rx_ring,
+			       int budget)
+{
+	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+	unsigned int driver_drop_packets = 0;
+	u16 cleaned_count = mucse_desc_unused_rx(rx_ring);
+	bool fail_alloc = false;
+
+	while (likely(total_rx_packets < budget)) {
+		union rnpgbe_rx_desc *rx_desc;
+		struct mucse_rx_buffer *rx_buffer;
+		struct sk_buff *skb;
+		unsigned int size;
+
+		/* return some buffers to hardware, one at a time is too slow */
+		if (cleaned_count >= M_RX_BUFFER_WRITE) {
+			fail_alloc |= rnpgbe_alloc_rx_buffers(rx_ring, cleaned_count);
+			cleaned_count = 0;
+		}
+		rx_desc = M_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
+		if (!rnpgbe_test_staterr(rx_desc, M_RXD_STAT_DD))
+			break;
+
+		/* This memory barrier is needed to keep us from reading
+		 * any other fields out of the rx_desc until we know the
+		 * descriptor has been written back
+		 */
+		dma_rmb();
+		size = le16_to_cpu(rx_desc->wb.len);
+		if (!size)
+			break;
+
+		rx_buffer = rnpgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size);
+
+		if (skb)
+			rnpgbe_add_rx_frag(rx_ring, rx_buffer, skb, size);
+		else
+			skb = rnpgbe_build_skb(rx_ring, rx_buffer, rx_desc,
+					       size);
+		/* exit if we failed to retrieve a buffer */
+		if (!skb) {
+			page_pool_recycle_direct(rx_ring->page_pool,
+						 rx_buffer->page);
+			rx_ring->rx_stats.alloc_rx_buff_failed++;
+			break;
+		}
+
+		rnpgbe_put_rx_buffer(rx_ring, rx_buffer, skb);
+		cleaned_count++;
+
+		/* place incomplete frames back on ring for completion */
+		if (rnpgbe_is_non_eop(rx_ring, rx_desc, skb))
+			continue;
+
+		/* verify the packet layout is correct */
+		if (rnpgbe_cleanup_headers(rx_ring, rx_desc, skb)) {
+			/* we should clean it since we used all info in it */
+			rx_desc->wb.cmd = 0;
+			continue;
+		}
+
+		/* probably a little skewed due to removing CRC */
+		total_rx_bytes += skb->len;
+		/* populate checksum, timestamp, VLAN, and protocol */
+		rnpgbe_process_skb_fields(rx_ring, rx_desc, skb);
+		/* we should clean it since we used all info in it */
+		rx_desc->wb.cmd = 0;
+		napi_gro_receive(&q_vector->napi, skb);
+		/* update budget accounting */
+		total_rx_packets++;
+	}
+
+	u64_stats_update_begin(&rx_ring->syncp);
+	rx_ring->stats.packets += total_rx_packets;
+	rx_ring->stats.bytes += total_rx_bytes;
+	rx_ring->rx_stats.driver_drop_packets += driver_drop_packets;
+	rx_ring->rx_stats.rx_clean_count += total_rx_packets;
+	rx_ring->rx_stats.rx_clean_times++;
+	if (rx_ring->rx_stats.rx_clean_times > 10) {
+		rx_ring->rx_stats.rx_clean_times = 0;
+		rx_ring->rx_stats.rx_clean_count = 0;
+	}
+	u64_stats_update_end(&rx_ring->syncp);
+	q_vector->rx.total_packets += total_rx_packets;
+	q_vector->rx.total_bytes += total_rx_bytes;
+
+	if (total_rx_packets >= budget)
+		rx_ring->rx_stats.poll_again_count++;
+	return fail_alloc ? budget : total_rx_packets;
+}
+
 /**
  * rnpgbe_poll - NAPI Rx polling callback
  * @napi: structure for representing this polling device
@@ -276,11 +694,26 @@ static int rnpgbe_poll(struct napi_struct *napi, int budget)
 		container_of(napi, struct mucse_q_vector, napi);
 	struct mucse *mucse = q_vector->mucse;
 	struct mucse_ring *ring;
-	int work_done = 0;
+	int per_ring_budget, work_done = 0;
 	bool clean_complete = true;
+	int cleaned_total = 0;
 
 	mucse_for_each_ring(ring, q_vector->tx)
 		clean_complete = rnpgbe_clean_tx_irq(q_vector, ring, budget);
+	if (q_vector->rx.count > 1)
+		per_ring_budget = max(budget / q_vector->rx.count, 1);
+	else
+		per_ring_budget = budget;
+
+	mucse_for_each_ring(ring, q_vector->rx) {
+		int cleaned = 0;
+
+		cleaned = rnpgbe_clean_rx_irq(q_vector, ring, per_ring_budget);
+		work_done += cleaned;
+		cleaned_total += cleaned;
+		if (cleaned >= per_ring_budget)
+			clean_complete = false;
+	}
 
 	if (!netif_running(mucse->netdev))
 		clean_complete = true;
@@ -799,6 +1232,30 @@ static void rnpgbe_free_all_tx_resources(struct mucse *mucse)
 		rnpgbe_free_tx_resources(mucse->tx_ring[i]);
 }
 
+static int mucse_alloc_page_pool(struct mucse_ring *rx_ring)
+{
+	int ret = 0;
+
+	struct page_pool_params pp_params = {
+		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.order = 0,
+		.pool_size = rx_ring->size,
+		.nid = dev_to_node(rx_ring->dev),
+		.dev = rx_ring->dev,
+		.dma_dir = DMA_FROM_DEVICE,
+		.offset = 0,
+		.max_len = PAGE_SIZE,
+	};
+
+	rx_ring->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(rx_ring->page_pool)) {
+		ret = PTR_ERR(rx_ring->page_pool);
+		rx_ring->page_pool = NULL;
+	}
+
+	return ret;
+}
+
 /**
  * rnpgbe_setup_rx_resources - allocate Rx resources (Descriptors)
  * @rx_ring:    rx descriptor ring (for a specific queue) to setup
@@ -841,6 +1298,8 @@ static int rnpgbe_setup_rx_resources(struct mucse_ring *rx_ring,
 	memset(rx_ring->desc, 0, rx_ring->size);
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
+	if (mucse_alloc_page_pool(rx_ring))
+		goto err;
 
 	return 0;
 err:
@@ -870,13 +1328,7 @@ static void rnpgbe_clean_rx_ring(struct mucse_ring *rx_ring)
 					      rx_buffer->page_offset,
 					      mucse_rx_bufsz(rx_ring),
 					      DMA_FROM_DEVICE);
-		/* free resources associated with mapping */
-		dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
-				     PAGE_SIZE,
-				     DMA_FROM_DEVICE,
-				     M_RX_DMA_ATTR);
-		__page_frag_cache_drain(rx_buffer->page,
-					rx_buffer->pagecnt_bias);
+		page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
 		rx_buffer->page = NULL;
 		i++;
 		rx_buffer++;
@@ -909,6 +1361,10 @@ static void rnpgbe_free_rx_resources(struct mucse_ring *rx_ring)
 	dma_free_coherent(rx_ring->dev, rx_ring->size, rx_ring->desc,
 			  rx_ring->dma);
 	rx_ring->desc = NULL;
+	if (rx_ring->page_pool) {
+		page_pool_destroy(rx_ring->page_pool);
+		rx_ring->page_pool = NULL;
+	}
 }
 
 /**
@@ -1049,44 +1505,103 @@ void rnpgbe_disable_rx_queue(struct mucse_ring *ring)
 	ring_wr32(ring, DMA_RX_START, 0);
 }
 
-#if (PAGE_SIZE < 8192)
-static inline int rnpgbe_compute_pad(int rx_buf_len)
+static bool mucse_alloc_mapped_page(struct mucse_ring *rx_ring,
+				    struct mucse_rx_buffer *bi)
 {
-	int page_size, pad_size;
-
-	page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2);
-	pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len;
+	struct page *page = bi->page;
+	dma_addr_t dma;
 
-	return pad_size;
-}
+	/* since we are recycling buffers we should seldom need to alloc */
+	if (likely(page))
+		return true;
 
-static inline int rnpgbe_sg_size(void)
-{
-	int sg_size = SKB_WITH_OVERHEAD(PAGE_SIZE / 2) - NET_SKB_PAD;
+	page = page_pool_dev_alloc_pages(rx_ring->page_pool);
+	dma = page_pool_get_dma_addr(page);
 
-	sg_size -= NET_IP_ALIGN;
-	sg_size = ALIGN_DOWN(sg_size, 4);
+	bi->dma = dma;
+	bi->page = page;
+	bi->page_offset = RNP_SKB_PAD;
 
-	return sg_size;
+	return true;
 }
 
-#define SG_SIZE  rnpgbe_sg_size()
-static inline int rnpgbe_skb_pad(void)
+static inline void mucse_update_rx_tail(struct mucse_ring *rx_ring,
+					u32 val)
 {
-	int rx_buf_len = SG_SIZE;
-
-	return rnpgbe_compute_pad(rx_buf_len);
+	rx_ring->next_to_use = val;
+	/* update next to alloc since we have filled the ring */
+	rx_ring->next_to_alloc = val;
+	/*
+	 * Force memory writes to complete before letting h/w
+	 * know there are new descriptors to fetch.  (Only
+	 * applicable for weak-ordered memory model archs,
+	 * such as IA-64).
+	 */
+	wmb();
+	m_wr_reg(rx_ring->tail, val);
 }
 
-#define RNP_SKB_PAD rnpgbe_skb_pad()
-static inline unsigned int rnpgbe_rx_offset(void)
+/**
+ * rnpgbe_alloc_rx_buffers - Replace used receive buffers
+ * @rx_ring: ring to place buffers on
+ * @cleaned_count: number of buffers to replace
+ **/
+static bool rnpgbe_alloc_rx_buffers(struct mucse_ring *rx_ring,
+				    u16 cleaned_count)
 {
-	return RNP_SKB_PAD;
-}
+	union rnpgbe_rx_desc *rx_desc;
+	struct mucse_rx_buffer *bi;
+	u16 i = rx_ring->next_to_use;
+	u64 fun_id = ((u64)(rx_ring->pfvfnum) << (32 + 24));
+	bool err = false;
+	u16 bufsz;
+	/* nothing to do */
+	if (!cleaned_count)
+		return err;
 
-#else /* PAGE_SIZE < 8192 */
-#define RNP_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#endif
+	rx_desc = M_RX_DESC(rx_ring, i);
+	bi = &rx_ring->rx_buffer_info[i];
+	i -= rx_ring->count;
+	bufsz = mucse_rx_bufsz(rx_ring);
+
+	do {
+		if (!mucse_alloc_mapped_page(rx_ring, bi)) {
+			err = true;
+			break;
+		}
+
+		{
+			u16 *data = page_address(bi->page) + bi->page_offset;
+			/* poison so rnpgbe_get_rx_buffer() can detect stale data */
+			*data = CHECK_DATA;
+		}
+
+		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
+						 bi->page_offset, bufsz,
+						 DMA_FROM_DEVICE);
+		rx_desc->pkt_addr =
+			cpu_to_le64(bi->dma + bi->page_offset + fun_id);
+
+		/* clean dd */
+		rx_desc->resv_cmd = 0;
+		rx_desc++;
+		bi++;
+		i++;
+		if (unlikely(!i)) {
+			rx_desc = M_RX_DESC(rx_ring, 0);
+			bi = rx_ring->rx_buffer_info;
+			i -= rx_ring->count;
+		}
+		cleaned_count--;
+	} while (cleaned_count);
+
+	i += rx_ring->count;
+
+	if (rx_ring->next_to_use != i)
+		mucse_update_rx_tail(rx_ring, i);
+
+	return err;
+}
 
 static void rnpgbe_configure_rx_ring(struct mucse *mucse,
 				     struct mucse_ring *ring)
@@ -1126,6 +1641,7 @@ static void rnpgbe_configure_rx_ring(struct mucse *mucse,
 	ring_wr32(ring, DMA_REG_RX_INT_DELAY_TIMER,
 		  mucse->rx_usecs * hw->usecstocount);
 	ring_wr32(ring, DMA_REG_RX_INT_DELAY_PKTCNT, mucse->rx_frames);
+	rnpgbe_alloc_rx_buffers(ring, mucse_desc_unused_rx(ring));
 }
 
 /**
@@ -1151,7 +1667,7 @@ void rnpgbe_configure_rx(struct mucse *mucse)
 
 /**
  * rnpgbe_clean_all_tx_rings - Free Tx Buffers for all queues
- * @adapter: board private structure
+ * @mucse: board private structure
  **/
 void rnpgbe_clean_all_tx_rings(struct mucse *mucse)
 {
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
index 7179e5ebfbf0..5c7e4bd6297f 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.h
@@ -6,6 +6,7 @@
 
 #include "rnpgbe.h"
 
+#define CHECK_DATA (0xabcd)
 #define RING_OFFSET(n) (0x100 * (n))
 #define DMA_DUMY (0xc)
 #define DMA_AXI_EN (0x10)
@@ -106,7 +107,7 @@ static inline void rnpgbe_irq_disable_queues(struct mucse_q_vector *q_vector)
 
 /**
  * rnpgbe_irq_disable - Mask off interrupt generation on the NIC
- * @adapter: board private structure
+ * @mucse: board private structure
  **/
 static inline void rnpgbe_irq_disable(struct mucse *mucse)
 {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 15/15] net: rnpgbe: Add ITR for rx
  2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
                   ` (13 preceding siblings ...)
  2025-07-03  1:48 ` [PATCH 14/15] net: rnpgbe: Add base rx function Dong Yibo
@ 2025-07-03  1:48 ` Dong Yibo
  14 siblings, 0 replies; 30+ messages in thread
From: Dong Yibo @ 2025-07-03  1:48 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck
  Cc: netdev, linux-doc, linux-kernel, dong100

Adjust the rx interrupt throttle rate (ITR) according to rx packets/bytes.

Signed-off-by: Dong Yibo <dong100@mucse.com>
---
 drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h    |  5 +
 .../net/ethernet/mucse/rnpgbe/rnpgbe_lib.c    | 91 ++++++++++++++++++-
 2 files changed, 95 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
index 0b6ba4c3a6cb..8e692da05eb7 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe.h
@@ -551,6 +551,8 @@ struct mucse_ring_container {
 	unsigned int total_packets;
 	u16 work_limit;
 	u16 count;
+	u16 itr;
+	int update_count;
 };
 
 struct mucse_q_vector {
@@ -705,6 +707,9 @@ static inline __le16 rnpgbe_test_staterr(union rnpgbe_rx_desc *rx_desc,
 
 #define M_TRY_LINK_TIMEOUT (4 * HZ)
 
+#define M_LOWEREST_ITR (5)
+#define M_4K_ITR (980)
+
 #define M_RX_BUFFER_WRITE (16)
 #define m_rd_reg(reg) readl((void *)(reg))
 #define m_wr_reg(reg, val) writel((val), (void *)(reg))
diff --git a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
index 05073663ad0e..5d82f063eade 100644
--- a/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
+++ b/drivers/net/ethernet/mucse/rnpgbe/rnpgbe_lib.c
@@ -681,6 +681,62 @@ static int rnpgbe_clean_rx_irq(struct mucse_q_vector *q_vector,
 	return fail_alloc ? budget : total_rx_packets;
 }
 
+static void rnpgbe_update_ring_itr_rx(struct mucse_q_vector *q_vector)
+{
+	int new_val = q_vector->itr_rx;
+	int avg_wire_size = 0;
+	struct mucse *mucse = q_vector->mucse;
+	unsigned int packets;
+
+	switch (mucse->link_speed) {
+	case M_LINK_SPEED_10_FULL:
+	case M_LINK_SPEED_100_FULL:
+		new_val = M_4K_ITR;
+		goto set_itr_val;
+	default:
+		break;
+	}
+
+	packets = q_vector->rx.total_packets;
+	if (packets)
+		avg_wire_size = max_t(u32, avg_wire_size,
+				      q_vector->rx.total_bytes / packets);
+
+	/* if avg_wire_size isn't set no work was done */
+	if (!avg_wire_size)
+		goto clear_counts;
+
+	/* Add 24 bytes to size to account for CRC, preamble, and gap */
+	avg_wire_size += 24;
+
+	/* Don't starve jumbo frames */
+	avg_wire_size = min(avg_wire_size, 3000);
+
+	/* Give a little boost to mid-size frames */
+	if (avg_wire_size > 300 && avg_wire_size < 1200)
+		new_val = avg_wire_size / 3;
+	else
+		new_val = avg_wire_size / 2;
+
+	if (new_val < M_LOWEREST_ITR)
+		new_val = M_LOWEREST_ITR;
+
+set_itr_val:
+	if (q_vector->rx.itr != new_val) {
+		q_vector->rx.update_count++;
+		if (q_vector->rx.update_count >= 2) {
+			q_vector->rx.itr = new_val;
+			q_vector->rx.update_count = 0;
+		}
+	} else {
+		q_vector->rx.update_count = 0;
+	}
+
+clear_counts:
+	q_vector->rx.total_bytes = 0;
+	q_vector->rx.total_packets = 0;
+}
+
 /**
  * rnpgbe_poll - NAPI Rx polling callback
  * @napi: structure for representing this polling device
@@ -725,6 +781,7 @@ static int rnpgbe_poll(struct napi_struct *napi, int budget)
 		return budget;
 	/* all work done, exit the polling mode */
 	if (likely(napi_complete_done(napi, work_done))) {
+		rnpgbe_update_ring_itr_rx(q_vector);
 		if (!test_bit(__MUCSE_DOWN, &mucse->state))
 			rnpgbe_irq_enable_queues(mucse, q_vector);
 	}
@@ -1677,12 +1734,44 @@ void rnpgbe_clean_all_tx_rings(struct mucse *mucse)
 		rnpgbe_clean_tx_ring(mucse->tx_ring[i]);
 }
 
+static void rnpgbe_write_eitr_rx(struct mucse_q_vector *q_vector)
+{
+	struct mucse *mucse = q_vector->mucse;
+	struct mucse_hw *hw = &mucse->hw;
+	u32 new_itr_rx = q_vector->rx.itr;
+	struct mucse_ring *ring;
+
+	new_itr_rx = new_itr_rx * hw->usecstocount;
+	/* if we are in auto mode write to hw */
+	mucse_for_each_ring(ring, q_vector->rx) {
+		ring_wr32(ring, DMA_REG_RX_INT_DELAY_TIMER, new_itr_rx);
+		if (ring->ring_flags & M_RING_LOWER_ITR) {
+			/* if we are already in this mode skip */
+			if (q_vector->itr_rx == M_LOWEREST_ITR)
+				continue;
+			ring_wr32(ring, DMA_REG_RX_INT_DELAY_PKTCNT, 1);
+			ring_wr32(ring, DMA_REG_RX_INT_DELAY_TIMER,
+				  M_LOWEREST_ITR);
+			q_vector->itr_rx = M_LOWEREST_ITR;
+		} else {
+			if (new_itr_rx == q_vector->itr_rx)
+				continue;
+			ring_wr32(ring, DMA_REG_RX_INT_DELAY_TIMER,
+				  new_itr_rx);
+			ring_wr32(ring, DMA_REG_RX_INT_DELAY_PKTCNT,
+				  mucse->rx_frames);
+			/* cache the value written so the next pass can skip */
+			q_vector->itr_rx = new_itr_rx;
+		}
+	}
+}
+
 static irqreturn_t rnpgbe_msix_clean_rings(int irq, void *data)
 {
 	struct mucse_q_vector *q_vector = (struct mucse_q_vector *)data;
 
 	rnpgbe_irq_disable_queues(q_vector);
-
+	rnpgbe_write_eitr_rx(q_vector);
 	if (q_vector->rx.ring || q_vector->tx.ring)
 		napi_schedule_irqoff(&q_vector->napi);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe
  2025-07-03  1:48 ` [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe Dong Yibo
@ 2025-07-03 16:25   ` Andrew Lunn
  2025-07-04  2:10     ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-03 16:25 UTC (permalink / raw)
  To: Dong Yibo
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -16001,11 +16001,7 @@ F:	tools/testing/vma/
>  
>  MEMORY MAPPING - LOCKING
>  M:	Andrew Morton <akpm@linux-foundation.org>
> -M:	Suren Baghdasaryan <surenb@google.com>
> -M:	Liam R. Howlett <Liam.Howlett@oracle.com>
> -M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> -R:	Vlastimil Babka <vbabka@suse.cz>
> -R:	Shakeel Butt <shakeel.butt@linux.dev>
> +M:	Suren Baghdasaryan <surenb@google.com> M:	Liam R. Howlett <Liam.Howlett@oracle.com> M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com> R:	Vlastimil Babka <vbabka@suse.cz> R:	Shakeel Butt <shakeel.butt@linux.dev>

You clearly have not reviewed your own patch, or you would not be
changing this section of the MAINTAINERs file.

> +if NET_VENDOR_MUCSE
> +
> +config MGBE
> +	tristate "Mucse(R) 1GbE PCI Express adapters support"
> +        depends on PCI
> +	select PAGE_POOL
> +        help
> +          This driver supports Mucse(R) 1GbE PCI Express family of
> +          adapters.
> +
> +	  More specific information on configuring the driver is in
> +	  <file:Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst>.
> +
> +          To compile this driver as a module, choose M here. The module
> +          will be called rnpgbe.

There is some odd indentation here.

> +#include <linux/string.h>
> +#include <linux/etherdevice.h>
> +
> +#include "rnpgbe.h"
> +
> +char rnpgbe_driver_name[] = "rnpgbe";
> +static const char rnpgbe_driver_string[] =
> +	"mucse 1 Gigabit PCI Express Network Driver";
> +#define DRV_VERSION "1.0.0"
> +const char rnpgbe_driver_version[] = DRV_VERSION;

Driver versions are pointless, since they never change, yet the kernel
around the driver changes all the time. Please drop.

> +static const char rnpgbe_copyright[] =
> +	"Copyright (c) 2020-2025 mucse Corporation.";

Why do you need this as a string?

> +static int rnpgbe_add_adpater(struct pci_dev *pdev)
> +{
> +	struct mucse *mucse = NULL;
> +	struct net_device *netdev;
> +	static int bd_number;
> +
> +	pr_info("====  add rnpgbe queues:%d ====", RNPGBE_MAX_QUEUES);

If you are still debugging this driver, please wait until it is mostly
bug free before submitting. I would not expect a production quality
driver to have prints like this.

> +	netdev = alloc_etherdev_mq(sizeof(struct mucse), RNPGBE_MAX_QUEUES);
> +	if (!netdev)
> +		return -ENOMEM;
> +
> +	mucse = netdev_priv(netdev);
> +	memset((char *)mucse, 0x00, sizeof(struct mucse));

priv is guaranteed to be zero'ed.

> +static void rnpgbe_shutdown(struct pci_dev *pdev)
> +{
> +	bool wake = false;
> +
> +	__rnpgbe_shutdown(pdev, &wake);

Please avoid using __ function names. Those are supposed to be
reserved for the compiler. Sometimes you will see single _ for
functions which have an unlocked version and a locked version.

> +static int __init rnpgbe_init_module(void)
> +{
> +	int ret;
> +
> +	pr_info("%s - version %s\n", rnpgbe_driver_string,
> +		rnpgbe_driver_version);
> +	pr_info("%s\n", rnpgbe_copyright);

Please don't spam the log. Only print something on error.

	Andrew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe
  2025-07-03 16:25   ` Andrew Lunn
@ 2025-07-04  2:10     ` Yibo Dong
  0 siblings, 0 replies; 30+ messages in thread
From: Yibo Dong @ 2025-07-04  2:10 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Thu, Jul 03, 2025 at 06:25:21PM +0200, Andrew Lunn wrote:
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -16001,11 +16001,7 @@ F:	tools/testing/vma/
> >  
> >  MEMORY MAPPING - LOCKING
> >  M:	Andrew Morton <akpm@linux-foundation.org>
> > -M:	Suren Baghdasaryan <surenb@google.com>
> > -M:	Liam R. Howlett <Liam.Howlett@oracle.com>
> > -M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > -R:	Vlastimil Babka <vbabka@suse.cz>
> > -R:	Shakeel Butt <shakeel.butt@linux.dev>
> > +M:	Suren Baghdasaryan <surenb@google.com> M:	Liam R. Howlett <Liam.Howlett@oracle.com> M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com> R:	Vlastimil Babka <vbabka@suse.cz> R:	Shakeel Butt <shakeel.butt@linux.dev>
> 
> You clearly have not reviewed your own patch, or you would not be
> changing this section of the MAINTAINERs file.
> 
Sorry, I didn't review it carefully. I will correct this error and
review all the remaining patches.
> > +if NET_VENDOR_MUCSE
> > +
> > +config MGBE
> > +	tristate "Mucse(R) 1GbE PCI Express adapters support"
> > +        depends on PCI
> > +	select PAGE_POOL
> > +        help
> > +          This driver supports Mucse(R) 1GbE PCI Express family of
> > +          adapters.
> > +
> > +	  More specific information on configuring the driver is in
> > +	  <file:Documentation/networking/device_drivers/ethernet/mucse/rnpgbe.rst>.
> > +
> > +          To compile this driver as a module, choose M here. The module
> > +          will be called rnpgbe.
> 
> There is some odd indentation here.
> 
I will correct this in v2.
> > +#include <linux/string.h>
> > +#include <linux/etherdevice.h>
> > +
> > +#include "rnpgbe.h"
> > +
> > +char rnpgbe_driver_name[] = "rnpgbe";
> > +static const char rnpgbe_driver_string[] =
> > +	"mucse 1 Gigabit PCI Express Network Driver";
> > +#define DRV_VERSION "1.0.0"
> > +const char rnpgbe_driver_version[] = DRV_VERSION;
> 
> Driver versions are pointless, since they never change, yet the kernel
> around the driver changes all the time. Please drop.
> 
OK, I got it.
> > +static const char rnpgbe_copyright[] =
> > +	"Copyright (c) 2020-2025 mucse Corporation.";
> 
> Why do you need this as a string?
> 
I printed this with 'pr_info' before. Of course, I should remove it along
with the 'pr_info'.
> > +static int rnpgbe_add_adpater(struct pci_dev *pdev)
> > +{
> > +	struct mucse *mucse = NULL;
> > +	struct net_device *netdev;
> > +	static int bd_number;
> > +
> > +	pr_info("====  add rnpgbe queues:%d ====", RNPGBE_MAX_QUEUES);
> 
> If you are still debugging this driver, please wait until it is mostly
> bug free before submitting. I would not expect a production quality
> driver to have prints like this.
> 
Got it, I will remove 'pr_info'.
> > +	netdev = alloc_etherdev_mq(sizeof(struct mucse), RNPGBE_MAX_QUEUES);
> > +	if (!netdev)
> > +		return -ENOMEM;
> > +
> > +	mucse = netdev_priv(netdev);
> > +	memset((char *)mucse, 0x00, sizeof(struct mucse));
> 
> priv is guaranteed to be zero'ed.
> 
I will remove 'memset' here.
> > +static void rnpgbe_shutdown(struct pci_dev *pdev)
> > +{
> > +	bool wake = false;
> > +
> > +	__rnpgbe_shutdown(pdev, &wake);
> 
> Please avoid using __ function names. Those are supposed to be
> reserved for the compiler. Sometimes you will see single _ for
> functions which have an unlocked version and a locked version.
> 
Got it, I will fix this.
> > +static int __init rnpgbe_init_module(void)
> > +{
> > +	int ret;
> > +
> > +	pr_info("%s - version %s\n", rnpgbe_driver_string,
> > +		rnpgbe_driver_version);
> > +	pr_info("%s\n", rnpgbe_copyright);
> 
> Please don't spam the log. Only print something on error.
> 
> 	Andrew
> 
I will remove the log, thanks for your feedback.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support
  2025-07-03  1:48 ` [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support Dong Yibo
@ 2025-07-04 18:03   ` Andrew Lunn
  2025-07-07  1:32     ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-04 18:03 UTC (permalink / raw)
  To: Dong Yibo
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

> +#define M_NET_FEATURE_SG ((u32)(1 << 0))
> +#define M_NET_FEATURE_TX_CHECKSUM ((u32)(1 << 1))
> +#define M_NET_FEATURE_RX_CHECKSUM ((u32)(1 << 2))

Please use the BIT() macro.

> +	u32 feature_flags;
> +	u16 usecstocount;
> +};
> +

> +#define rnpgbe_rd_reg(reg) readl((void *)(reg))
> +#define rnpgbe_wr_reg(reg, val) writel((val), (void *)(reg))

These casts look wrong. You should be getting your basic iomem pointer
from a function which returns an void __iomem* pointer, so the cast
should not be needed.

> -static int rnpgbe_add_adpater(struct pci_dev *pdev)
> +static int rnpgbe_add_adpater(struct pci_dev *pdev,
> +			      const struct rnpgbe_info *ii)
>  {
> +	int err = 0;
>  	struct mucse *mucse = NULL;
>  	struct net_device *netdev;
> +	struct mucse_hw *hw = NULL;
> +	u8 __iomem *hw_addr = NULL;
> +	u32 dma_version = 0;
>  	static int bd_number;
> +	u32 queues = ii->total_queue_pair_cnts;

You need to work on your reverse Christmas tree. Local variables
should be ordered longest to shortest.

       Andrew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 03/15] net: rnpgbe: Add basic mbx ops support
  2025-07-03  1:48 ` [PATCH 03/15] net: rnpgbe: Add basic mbx ops support Dong Yibo
@ 2025-07-04 18:13   ` Andrew Lunn
  2025-07-07  6:39     ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-04 18:13 UTC (permalink / raw)
  To: Dong Yibo
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

>  #define MBX_FEATURE_WRITE_DELAY BIT(1)
>  	u32 mbx_feature;
>  	/* cm3 <-> pf mbx */
> -	u32 cpu_pf_shm_base;
> -	u32 pf2cpu_mbox_ctrl;
> -	u32 pf2cpu_mbox_mask;
> -	u32 cpu_pf_mbox_mask;
> -	u32 cpu2pf_mbox_vec;
> +	u32 fw_pf_shm_base;
> +	u32 pf2fw_mbox_ctrl;
> +	u32 pf2fw_mbox_mask;
> +	u32 fw_pf_mbox_mask;
> +	u32 fw2pf_mbox_vec;

Why is a patch adding a new feature deleting code?

> +/**
> + * mucse_read_mbx - Reads a message from the mailbox
> + * @hw: Pointer to the HW structure
> + * @msg: The message buffer
> + * @size: Length of buffer
> + * @mbx_id: Id of vf/fw to read
> + *
> + * returns 0 if it successfully read message or else
> + * MUCSE_ERR_MBX.
> + **/
> +s32 mucse_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size,

s32 is an unusual type for linux. Can the mbox actually return
negative amounts of data?

> +/**
> + * mucse_write_mbx - Write a message to the mailbox
> + * @hw: Pointer to the HW structure
> + * @msg: The message buffer
> + * @size: Length of buffer
> + * @mbx_id: Id of vf/fw to write
> + *
> + * returns 0 if it successfully write message or else
> + * MUCSE_ERR_MBX.

Don't invent new error codes. EINVAL would do.

> + **/
> +s32 mucse_write_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
> +		    enum MBX_ID mbx_id)
> +{
> +	struct mucse_mbx_info *mbx = &hw->mbx;
> +	s32 ret_val = 0;
> +
> +	if (size > mbx->size)
> +		ret_val = MUCSE_ERR_MBX;
> +	else if (mbx->ops.write)
> +		ret_val = mbx->ops.write(hw, msg, size, mbx_id);
> +
> +	return ret_val;
> +}
> +static inline void mucse_mbx_inc_pf_ack(struct mucse_hw *hw,
> +					enum MBX_ID mbx_id)

No inline functions in C files. Let the compiler decide.

> +static s32 mucse_poll_for_msg(struct mucse_hw *hw, enum MBX_ID mbx_id)
> +{
> +	struct mucse_mbx_info *mbx = &hw->mbx;
> +	int countdown = mbx->timeout;
> +
> +	if (!countdown || !mbx->ops.check_for_msg)
> +		goto out;
> +
> +	while (countdown && mbx->ops.check_for_msg(hw, mbx_id)) {
> +		countdown--;
> +		if (!countdown)
> +			break;
> +		udelay(mbx->usec_delay);
> +	}
> +out:
> +	return countdown ? 0 : -ETIME;

ETIMEDOUT, not ETIME. Please use iopoll.h, not roll your own.

    Andrew

---
pw-bot: cr

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw ops support
  2025-07-03  1:48 ` [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw " Dong Yibo
@ 2025-07-04 18:25   ` Andrew Lunn
  2025-07-07  7:37     ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-04 18:25 UTC (permalink / raw)
  To: Dong Yibo
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

> +/**
> + * mucse_fw_send_cmd_wait - Send cmd req and wait for response
> + * @hw: Pointer to the HW structure
> + * @req: Pointer to the cmd req structure
> + * @reply: Pointer to the fw reply structure
> + *
> + * mucse_fw_send_cmd_wait sends req to pf-cm3 mailbox and wait
> + * reply from fw.
> + *
> + * Returns 0 on success, negative on failure
> + **/
> +static int mucse_fw_send_cmd_wait(struct mucse_hw *hw,
> +				  struct mbx_fw_cmd_req *req,
> +				  struct mbx_fw_cmd_reply *reply)
> +{
> +	int err;
> +	int retry_cnt = 3;
> +
> +	if (!hw || !req || !reply || !hw->mbx.ops.read_posted)

Can this happen?

If this is not supposed to happen, it is better that the driver oopses, so
you get a stack trace and find where the driver is broken.

> +		return -EINVAL;
> +
> +	/* if pcie off, nothing todo */
> +	if (pci_channel_offline(hw->pdev))
> +		return -EIO;

What can cause it to go offline? Is this to do with PCIe hotplug?

> +
> +	if (mutex_lock_interruptible(&hw->mbx.lock))
> +		return -EAGAIN;

mutex_lock_interruptable() returns -EINTR, which is what you should
return, not -EAGAIN.

> +
> +	err = hw->mbx.ops.write_posted(hw, (u32 *)req,
> +				       L_WD(req->datalen + MBX_REQ_HDR_LEN),
> +				       MBX_FW);
> +	if (err) {
> +		mutex_unlock(&hw->mbx.lock);
> +		return err;
> +	}
> +
> +retry:
> +	retry_cnt--;
> +	if (retry_cnt < 0)
> +		return -EIO;
> +
> +	err = hw->mbx.ops.read_posted(hw, (u32 *)reply,
> +				      L_WD(sizeof(*reply)),
> +				      MBX_FW);
> +	if (err) {
> +		mutex_unlock(&hw->mbx.lock);
> +		return err;
> +	}
> +
> +	if (reply->opcode != req->opcode)
> +		goto retry;
> +
> +	mutex_unlock(&hw->mbx.lock);
> +
> +	if (reply->error_code)
> +		return -reply->error_code;

The mbox is using linux error codes? 

> +#define FLAGS_DD BIT(0) /* driver clear 0, FW must set 1 */
> +/* driver clear 0, FW must set only if it reporting an error */
> +#define FLAGS_ERR BIT(2)
> +
> +/* req is little endian. big endian should be considered */
> +struct mbx_fw_cmd_req {
> +	u16 flags; /* 0-1 */
> +	u16 opcode; /* 2-3 enum GENERIC_CMD */
> +	u16 datalen; /* 4-5 */
> +	u16 ret_value; /* 6-7 */

If this is little endian, please use __le16, __le32 etc, so that the
static analysers will tell you if you are missing cpu_to_le32 etc.

	Andrew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip
  2025-07-03  1:48 ` [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip Dong Yibo
@ 2025-07-04 18:33   ` Andrew Lunn
  2025-07-07  8:14     ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-04 18:33 UTC (permalink / raw)
  To: Dong Yibo
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

>  static int init_firmware_for_n210(struct mucse_hw *hw)
>  {
> -	return 0;
> +	char *filename = "n210_driver_update.bin";
> +	const struct firmware *fw;
> +	struct pci_dev *pdev = hw->pdev;
> +	int rc = 0;
> +	int err = 0;
> +	struct mucse *mucse = (struct mucse *)hw->back;
> +
> +	rc = request_firmware(&fw, filename, &pdev->dev);
> +
> +	if (rc != 0) {
> +		dev_err(&pdev->dev, "requesting firmware file failed\n");
> +		return rc;
> +	}
> +
> +	if (rnpgbe_check_fw_from_flash(hw, fw->data)) {
> +		dev_info(&pdev->dev, "firmware type error\n");

Why dev_info()? If this is an error then you should use dev_err().

> +	dev_info(&pdev->dev, "init firmware successfully.");
> +	dev_info(&pdev->dev, "Please reboot.");

Don't spam the log with status messages.

Reboot? Humm, maybe this should be devlink flash command.

request_firmware() is normally used for download into SRAM which is
then used immediately. If you need to reboot the machine, devlink is
more appropriate.

> +static inline void mucse_sfc_command(u8 __iomem *hw_addr, u32 cmd)
> +{
> +	iowrite32(cmd, (hw_addr + 0x8));
> +	iowrite32(1, (hw_addr + 0x0));
> +	while (ioread32(hw_addr) != 0)
> +		;


Never do endless loops waiting for hardware. It might never give what
you want, and there is no escape.

> +static int32_t mucse_sfc_flash_wait_idle(u8 __iomem *hw_addr)
> +{
> +	int time = 0;
> +	int ret = HAL_OK;
> +
> +	iowrite32(CMD_CYCLE(8), (hw_addr + 0x10));
> +	iowrite32(RD_DATA_CYCLE(8), (hw_addr + 0x14));
> +
> +	while (1) {
> +		mucse_sfc_command(hw_addr, CMD_READ_STATUS);
> +		if ((ioread32(hw_addr + 0x4) & 0x1) == 0)
> +			break;
> +		time++;
> +		if (time > 1000)
> +			ret = HAL_FAIL;
> +	}

iopoll.h 

> +static int mucse_sfc_flash_erase_sector(u8 __iomem *hw_addr,
> +					u32 address)
> +{
> +	int ret = HAL_OK;
> +
> +	if (address >= RSP_FLASH_HIGH_16M_OFFSET)
> +		return HAL_EINVAL;

Use linux error codes, EINVAL.

> +
> +	if (address % 4096)
> +		return HAL_EINVAL;

EINVAL

> +
> +	mucse_sfc_flash_write_enable(hw_addr);
> +
> +	iowrite32((CMD_CYCLE(8) | ADDR_CYCLE(24)), (hw_addr + 0x10));
> +	iowrite32((RD_DATA_CYCLE(0) | WR_DATA_CYCLE(0)), (hw_addr + 0x14));
> +	iowrite32(SFCADDR(address), (hw_addr + 0xc));
> +	mucse_sfc_command(hw_addr, CMD_SECTOR_ERASE);
> +	if (mucse_sfc_flash_wait_idle(hw_addr)) {
> +		ret = HAL_FAIL;
> +		goto failed;

mucse_sfc_flash_wait_idle() should return -ETIMEDOUT, so return that.

	Andrew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support
  2025-07-04 18:03   ` Andrew Lunn
@ 2025-07-07  1:32     ` Yibo Dong
  0 siblings, 0 replies; 30+ messages in thread
From: Yibo Dong @ 2025-07-07  1:32 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Fri, Jul 04, 2025 at 08:03:47PM +0200, Andrew Lunn wrote:
> > +#define M_NET_FEATURE_SG ((u32)(1 << 0))
> > +#define M_NET_FEATURE_TX_CHECKSUM ((u32)(1 << 1))
> > +#define M_NET_FEATURE_RX_CHECKSUM ((u32)(1 << 2))
> 
> Please use the BIT() macro.
> 
Got it, I will fix this.
> > +	u32 feature_flags;
> > +	u16 usecstocount;
> > +};
> > +
> 
> > +#define rnpgbe_rd_reg(reg) readl((void *)(reg))
> > +#define rnpgbe_wr_reg(reg, val) writel((val), (void *)(reg))
> 
> These casts look wrong. You should be getting your basic iomem pointer
> from a function which returns an void __iomem* pointer, so the cast
> should not be needed.
> 
Yes, I also got a failure from the 'patch status' website; I should remove
the 'void *' cast here.
> > -static int rnpgbe_add_adpater(struct pci_dev *pdev)
> > +static int rnpgbe_add_adpater(struct pci_dev *pdev,
> > +			      const struct rnpgbe_info *ii)
> >  {
> > +	int err = 0;
> >  	struct mucse *mucse = NULL;
> >  	struct net_device *netdev;
> > +	struct mucse_hw *hw = NULL;
> > +	u8 __iomem *hw_addr = NULL;
> > +	u32 dma_version = 0;
> >  	static int bd_number;
> > +	u32 queues = ii->total_queue_pair_cnts;
> 
> You need to work on your reverse Christmas tree. Local variables
> should be ordered longest to shortest.
> 
>        Andrew
> 
Got it, I will fix it, and try to check other patches.
Thanks for your feedback.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 03/15] net: rnpgbe: Add basic mbx ops support
  2025-07-04 18:13   ` Andrew Lunn
@ 2025-07-07  6:39     ` Yibo Dong
  2025-07-07 12:00       ` Andrew Lunn
  0 siblings, 1 reply; 30+ messages in thread
From: Yibo Dong @ 2025-07-07  6:39 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Fri, Jul 04, 2025 at 08:13:19PM +0200, Andrew Lunn wrote:
> >  #define MBX_FEATURE_WRITE_DELAY BIT(1)
> >  	u32 mbx_feature;
> >  	/* cm3 <-> pf mbx */
> > -	u32 cpu_pf_shm_base;
> > -	u32 pf2cpu_mbox_ctrl;
> > -	u32 pf2cpu_mbox_mask;
> > -	u32 cpu_pf_mbox_mask;
> > -	u32 cpu2pf_mbox_vec;
> > +	u32 fw_pf_shm_base;
> > +	u32 pf2fw_mbox_ctrl;
> > +	u32 pf2fw_mbox_mask;
> > +	u32 fw_pf_mbox_mask;
> > +	u32 fw2pf_mbox_vec;
> 
> Why is a patch adding a new feature deleting code?
> 
It is not deleting code; 'cpu' here means the controller in the chip, not
the host. So I just renamed 'cpu' to 'fw' to avoid confusion.
> > +/**
> > + * mucse_read_mbx - Reads a message from the mailbox
> > + * @hw: Pointer to the HW structure
> > + * @msg: The message buffer
> > + * @size: Length of buffer
> > + * @mbx_id: Id of vf/fw to read
> > + *
> > + * returns 0 if it successfully read message or else
> > + * MUCSE_ERR_MBX.
> > + **/
> > +s32 mucse_read_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
> 
> s32 is an unusual type for linux. Can the mbox actually return
> negative amounts of data?
> 
No, it can't return a negative amount of data, but this function
returns a negative value when it fails. Maybe I should use 'int'?
> > +/**
> > + * mucse_write_mbx - Write a message to the mailbox
> > + * @hw: Pointer to the HW structure
> > + * @msg: The message buffer
> > + * @size: Length of buffer
> > + * @mbx_id: Id of vf/fw to write
> > + *
> > + * returns 0 if it successfully write message or else
> > + * MUCSE_ERR_MBX.
> 
> Don't invent new error codes. EINVAL would do.
> 
Got it, I will fix this.
> > + **/
> > +s32 mucse_write_mbx(struct mucse_hw *hw, u32 *msg, u16 size,
> > +		    enum MBX_ID mbx_id)
> > +{
> > +	struct mucse_mbx_info *mbx = &hw->mbx;
> > +	s32 ret_val = 0;
> > +
> > +	if (size > mbx->size)
> > +		ret_val = MUCSE_ERR_MBX;
> > +	else if (mbx->ops.write)
> > +		ret_val = mbx->ops.write(hw, msg, size, mbx_id);
> > +
> > +	return ret_val;
> > +}
> > +static inline void mucse_mbx_inc_pf_ack(struct mucse_hw *hw,
> > +					enum MBX_ID mbx_id)
> 
> No inline functions in C files. Let the compiler decide.
> 
Got it, I will move it to the h file.
> > +static s32 mucse_poll_for_msg(struct mucse_hw *hw, enum MBX_ID mbx_id)
> > +{
> > +	struct mucse_mbx_info *mbx = &hw->mbx;
> > +	int countdown = mbx->timeout;
> > +
> > +	if (!countdown || !mbx->ops.check_for_msg)
> > +		goto out;
> > +
> > +	while (countdown && mbx->ops.check_for_msg(hw, mbx_id)) {
> > +		countdown--;
> > +		if (!countdown)
> > +			break;
> > +		udelay(mbx->usec_delay);
> > +	}
> > +out:
> > +	return countdown ? 0 : -ETIME;
> 
> ETIMEDOUT, not ETIME. Please use iopoll.h, not roll your own.
> 
>     Andrew
> 
> ---
> pw-bot: cr
> 
Got it, I will fix it.
Thanks for your feedback.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw ops support
  2025-07-04 18:25   ` Andrew Lunn
@ 2025-07-07  7:37     ` Yibo Dong
  2025-07-07 12:09       ` Andrew Lunn
  0 siblings, 1 reply; 30+ messages in thread
From: Yibo Dong @ 2025-07-07  7:37 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Fri, Jul 04, 2025 at 08:25:12PM +0200, Andrew Lunn wrote:
> > +/**
> > + * mucse_fw_send_cmd_wait - Send cmd req and wait for response
> > + * @hw: Pointer to the HW structure
> > + * @req: Pointer to the cmd req structure
> > + * @reply: Pointer to the fw reply structure
> > + *
> > + * mucse_fw_send_cmd_wait sends req to pf-cm3 mailbox and wait
> > + * reply from fw.
> > + *
> > + * Returns 0 on success, negative on failure
> > + **/
> > +static int mucse_fw_send_cmd_wait(struct mucse_hw *hw,
> > +				  struct mbx_fw_cmd_req *req,
> > +				  struct mbx_fw_cmd_reply *reply)
> > +{
> > +	int err;
> > +	int retry_cnt = 3;
> > +
> > +	if (!hw || !req || !reply || !hw->mbx.ops.read_posted)
> 
> Can this happen?
> 
> If this is not supposed to happen, it is better the driver opps, so
> you get a stack trace and find where the driver is broken.
> 
Yes, it is not supposed to happen. So you mean I should remove this
check in order to get an oops when this condition happens?
> > +		return -EINVAL;
> > +
> > +	/* if pcie off, nothing todo */
> > +	if (pci_channel_offline(hw->pdev))
> > +		return -EIO;
> 
> What can cause it to go offline? Is this to do with PCIe hotplug?
> 
Yes, I try to detect a PCIe hotplug condition with 'pci_channel_offline'.
If that happens, the driver should never do BAR reads/writes, so it
returns here.
> > +
> > +	if (mutex_lock_interruptible(&hw->mbx.lock))
> > +		return -EAGAIN;
> 
> mutex_lock_interruptable() returns -EINTR, which is what you should
> return, not -EAGAIN.
> 
Got it, I should return '-EINTR' here.
> > +
> > +	err = hw->mbx.ops.write_posted(hw, (u32 *)req,
> > +				       L_WD(req->datalen + MBX_REQ_HDR_LEN),
> > +				       MBX_FW);
> > +	if (err) {
> > +		mutex_unlock(&hw->mbx.lock);
> > +		return err;
> > +	}
> > +
> > +retry:
> > +	retry_cnt--;
> > +	if (retry_cnt < 0)
> > +		return -EIO;
> > +
> > +	err = hw->mbx.ops.read_posted(hw, (u32 *)reply,
> > +				      L_WD(sizeof(*reply)),
> > +				      MBX_FW);
> > +	if (err) {
> > +		mutex_unlock(&hw->mbx.lock);
> > +		return err;
> > +	}
> > +
> > +	if (reply->opcode != req->opcode)
> > +		goto retry;
> > +
> > +	mutex_unlock(&hw->mbx.lock);
> > +
> > +	if (reply->error_code)
> > +		return -reply->error_code;
> 
> The mbox is using linux error codes? 
> 
It is used only between the driver and fw, and may be simply like this:
0        -- no error
non-zero -- error
So, it is not using linux error codes.
> > +#define FLAGS_DD BIT(0) /* driver clear 0, FW must set 1 */
> > +/* driver clear 0, FW must set only if it reporting an error */
> > +#define FLAGS_ERR BIT(2)
> > +
> > +/* req is little endian. big endian should be considered */
> > +struct mbx_fw_cmd_req {
> > +	u16 flags; /* 0-1 */
> > +	u16 opcode; /* 2-3 enum GENERIC_CMD */
> > +	u16 datalen; /* 4-5 */
> > +	u16 ret_value; /* 6-7 */
> 
> If this is little endian, please use __le16, __le32 etc, so that the
> static analysers will tell you if you are missing cpu_to_le32 etc.
> 
> 	Andrew
> 
Got it, I will fix it.

Thanks for your feedback.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip
  2025-07-04 18:33   ` Andrew Lunn
@ 2025-07-07  8:14     ` Yibo Dong
  0 siblings, 0 replies; 30+ messages in thread
From: Yibo Dong @ 2025-07-07  8:14 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Fri, Jul 04, 2025 at 08:33:14PM +0200, Andrew Lunn wrote:
> >  static int init_firmware_for_n210(struct mucse_hw *hw)
> >  {
> > -	return 0;
> > +	char *filename = "n210_driver_update.bin";
> > +	const struct firmware *fw;
> > +	struct pci_dev *pdev = hw->pdev;
> > +	int rc = 0;
> > +	int err = 0;
> > +	struct mucse *mucse = (struct mucse *)hw->back;
> > +
> > +	rc = request_firmware(&fw, filename, &pdev->dev);
> > +
> > +	if (rc != 0) {
> > +		dev_err(&pdev->dev, "requesting firmware file failed\n");
> > +		return rc;
> > +	}
> > +
> > +	if (rnpgbe_check_fw_from_flash(hw, fw->data)) {
> > +		dev_info(&pdev->dev, "firmware type error\n");
> 
> Why dev_info()? If this is an error then you should use dev_err().
> 
Yes, it should be dev_err().
> > +	dev_info(&pdev->dev, "init firmware successfully.");
> > +	dev_info(&pdev->dev, "Please reboot.");
> 
> Don't spam the log with status messages.
> 
> Reboot? Humm, maybe this should be devlink flash command.
> 
> request_firmware() is normally used for download into SRAM which is
> then used immediately. If you need to reboot the machine, devlink is
> more appropriate.
> 
Yes, this is used to write the firmware into the chip's flash, and a
reboot is then needed to run it. I will change it to devlink.
> > +static inline void mucse_sfc_command(u8 __iomem *hw_addr, u32 cmd)
> > +{
> > +	iowrite32(cmd, (hw_addr + 0x8));
> > +	iowrite32(1, (hw_addr + 0x0));
> > +	while (ioread32(hw_addr) != 0)
> > +		;
> 
> 
> Never do endless loops waiting for hardware. It might never give what
> you want, and there is no escape.
> 
Got it, I will update this.
> > +static int32_t mucse_sfc_flash_wait_idle(u8 __iomem *hw_addr)
> > +{
> > +	int time = 0;
> > +	int ret = HAL_OK;
> > +
> > +	iowrite32(CMD_CYCLE(8), (hw_addr + 0x10));
> > +	iowrite32(RD_DATA_CYCLE(8), (hw_addr + 0x14));
> > +
> > +	while (1) {
> > +		mucse_sfc_command(hw_addr, CMD_READ_STATUS);
> > +		if ((ioread32(hw_addr + 0x4) & 0x1) == 0)
> > +			break;
> > +		time++;
> > +		if (time > 1000)
> > +			ret = HAL_FAIL;
> > +	}
> 
> iopoll.h 
> 
Got it, I will use method in iopoll.h.
> > +static int mucse_sfc_flash_erase_sector(u8 __iomem *hw_addr,
> > +					u32 address)
> > +{
> > +	int ret = HAL_OK;
> > +
> > +	if (address >= RSP_FLASH_HIGH_16M_OFFSET)
> > +		return HAL_EINVAL;
> 
> Use linux error codes, EINVAL.
> 
Got it.
> > +
> > +	if (address % 4096)
> > +		return HAL_EINVAL;
> 
> EINVAL
> 
Got it.
> > +
> > +	mucse_sfc_flash_write_enable(hw_addr);
> > +
> > +	iowrite32((CMD_CYCLE(8) | ADDR_CYCLE(24)), (hw_addr + 0x10));
> > +	iowrite32((RD_DATA_CYCLE(0) | WR_DATA_CYCLE(0)), (hw_addr + 0x14));
> > +	iowrite32(SFCADDR(address), (hw_addr + 0xc));
> > +	mucse_sfc_command(hw_addr, CMD_SECTOR_ERASE);
> > +	if (mucse_sfc_flash_wait_idle(hw_addr)) {
> > +		ret = HAL_FAIL;
> > +		goto failed;
> 
> mucse_sfc_flash_wait_idle() should return -ETIMEDOUT, so return that.
> 
> 	Andrew
> 
Got it, I will return -ETIMEDOUT.
Thanks for your feedback.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 03/15] net: rnpgbe: Add basic mbx ops support
  2025-07-07  6:39     ` Yibo Dong
@ 2025-07-07 12:00       ` Andrew Lunn
  2025-07-08  1:44         ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-07 12:00 UTC (permalink / raw)
  To: Yibo Dong
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Mon, Jul 07, 2025 at 02:39:55PM +0800, Yibo Dong wrote:
> On Fri, Jul 04, 2025 at 08:13:19PM +0200, Andrew Lunn wrote:
> > >  #define MBX_FEATURE_WRITE_DELAY BIT(1)
> > >  	u32 mbx_feature;
> > >  	/* cm3 <-> pf mbx */
> > > -	u32 cpu_pf_shm_base;
> > > -	u32 pf2cpu_mbox_ctrl;
> > > -	u32 pf2cpu_mbox_mask;
> > > -	u32 cpu_pf_mbox_mask;
> > > -	u32 cpu2pf_mbox_vec;
> > > +	u32 fw_pf_shm_base;
> > > +	u32 pf2fw_mbox_ctrl;
> > > +	u32 pf2fw_mbox_mask;
> > > +	u32 fw_pf_mbox_mask;
> > > +	u32 fw2pf_mbox_vec;
> > 
> > Why is a patch adding a new feature deleting code?
> > 
> Not delete code, 'cpu' here means controller in the chip, not host.
> So, I just rename 'cpu' to 'fw' to avoid confusion.

So, let me rephrase my point. Why was it not called fw_foo right
from the beginning? You are making the code harder to review by doing
stuff like this. And your code is going to need a lot of review and
revisions, because its quality is low if you ask me.

	Andrew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw ops support
  2025-07-07  7:37     ` Yibo Dong
@ 2025-07-07 12:09       ` Andrew Lunn
  2025-07-08  2:01         ` Yibo Dong
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Lunn @ 2025-07-07 12:09 UTC (permalink / raw)
  To: Yibo Dong
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Mon, Jul 07, 2025 at 03:37:43PM +0800, Yibo Dong wrote:
> On Fri, Jul 04, 2025 at 08:25:12PM +0200, Andrew Lunn wrote:
> > > +/**
> > > + * mucse_fw_send_cmd_wait - Send cmd req and wait for response
> > > + * @hw: Pointer to the HW structure
> > > + * @req: Pointer to the cmd req structure
> > > + * @reply: Pointer to the fw reply structure
> > > + *
> > > + * mucse_fw_send_cmd_wait sends req to the pf-cm3 mailbox and
> > > + * waits for a reply from fw.
> > > + *
> > > + * Returns 0 on success, negative on failure
> > > + **/
> > > +static int mucse_fw_send_cmd_wait(struct mucse_hw *hw,
> > > +				  struct mbx_fw_cmd_req *req,
> > > +				  struct mbx_fw_cmd_reply *reply)
> > > +{
> > > +	int err;
> > > +	int retry_cnt = 3;
> > > +
> > > +	if (!hw || !req || !reply || !hw->mbx.ops.read_posted)
> > 
> > Can this happen?
> > 
> > If this is not supposed to happen, it is better the driver opps, so
> > you get a stack trace and find where the driver is broken.
> > 
> Yes, it is not supposed to happen. So, you mean I should remove this
> check in order to get an oops when this condition happens?

You should remove all defensive code. Let it explode with an Oops, so
you can find your bugs.

> > > +		return -EINVAL;
> > > +
> > > +	/* if pcie off, nothing todo */
> > > +	if (pci_channel_offline(hw->pdev))
> > > +		return -EIO;
> > 
> > What can cause it to go offline? Is this to do with PCIe hotplug?
> > 
> Yes, I try to detect a PCIe hotplug condition with 'pci_channel_offline'.
> If that happens, the driver should never do BAR reads/writes, so it
> returns here.

I don't know PCI hotplug too well, but I assume the driver core will
call the .release function. Can this function be called as part of
release? What actually happens on the PCI bus when you try to access a
device which no longer exists?

How have you tested this? Do you have the ability to do a hot{un}plug?

> > > +	if (mutex_lock_interruptible(&hw->mbx.lock))
> > > +		return -EAGAIN;
> > 
> > mutex_lock_interruptible() returns -EINTR, which is what you should
> > return, not -EAGAIN.
> > 
> Got it, I should return '-EINTR' here.

No, you should return whatever mutex_lock_interruptible()
returns. Whenever you call a function which returns an error code, you
should pass that error code up the call stack. Never replace one error
code with another.

> > > +	if (reply->error_code)
> > > +		return -reply->error_code;
> > 
> > The mbox is using linux error codes? 
> > 
> It is used only between the driver and fw, and may simply be like this:
> 0        -- no error
> non-zero -- error
> So, it is not using Linux error codes.

Your functions should always use Linux/POSIX error codes. So if your
firmware says an error has happened, turn it into a Linux/POSIX error
code: EINVAL, ETIMEDOUT, EIO, whatever makes the most sense.

	Andrew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 03/15] net: rnpgbe: Add basic mbx ops support
  2025-07-07 12:00       ` Andrew Lunn
@ 2025-07-08  1:44         ` Yibo Dong
  0 siblings, 0 replies; 30+ messages in thread
From: Yibo Dong @ 2025-07-08  1:44 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Mon, Jul 07, 2025 at 02:00:16PM +0200, Andrew Lunn wrote:
> On Mon, Jul 07, 2025 at 02:39:55PM +0800, Yibo Dong wrote:
> > On Fri, Jul 04, 2025 at 08:13:19PM +0200, Andrew Lunn wrote:
> > > >  #define MBX_FEATURE_WRITE_DELAY BIT(1)
> > > >  	u32 mbx_feature;
> > > >  	/* cm3 <-> pf mbx */
> > > > -	u32 cpu_pf_shm_base;
> > > > -	u32 pf2cpu_mbox_ctrl;
> > > > -	u32 pf2cpu_mbox_mask;
> > > > -	u32 cpu_pf_mbox_mask;
> > > > -	u32 cpu2pf_mbox_vec;
> > > > +	u32 fw_pf_shm_base;
> > > > +	u32 pf2fw_mbox_ctrl;
> > > > +	u32 pf2fw_mbox_mask;
> > > > +	u32 fw_pf_mbox_mask;
> > > > +	u32 fw2pf_mbox_vec;
> > > 
> > > Why is a patch adding a new feature deleting code?
> > > 
> > It does not delete code; 'cpu' here means the controller in the chip,
> > not the host. So I just renamed 'cpu' to 'fw' to avoid confusion.
> 
> So let me rephrase my point. Why was it not called fw_foo right
> from the beginning? You are making the code harder to review by doing
> stuff like this. And your code is going to need a lot of review and
> revisions, because its quality is low if you ask me.
> 
> 	Andrew
> 

Ok, you are right. It should have had the right name from the
beginning. I will try to avoid this in the future.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw ops support
  2025-07-07 12:09       ` Andrew Lunn
@ 2025-07-08  2:01         ` Yibo Dong
  0 siblings, 0 replies; 30+ messages in thread
From: Yibo Dong @ 2025-07-08  2:01 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: davem, edumazet, kuba, pabeni, horms, corbet, andrew+netdev,
	gur.stavi, maddy, mpe, danishanwar, lee, gongfan1, lorenzo,
	geert+renesas, Parthiban.Veerasooran, lukas.bulwahn,
	alexanderduyck, netdev, linux-doc, linux-kernel

On Mon, Jul 07, 2025 at 02:09:23PM +0200, Andrew Lunn wrote:
> On Mon, Jul 07, 2025 at 03:37:43PM +0800, Yibo Dong wrote:
> > On Fri, Jul 04, 2025 at 08:25:12PM +0200, Andrew Lunn wrote:
> > > > +/**
> > > > + * mucse_fw_send_cmd_wait - Send cmd req and wait for response
> > > > + * @hw: Pointer to the HW structure
> > > > + * @req: Pointer to the cmd req structure
> > > > + * @reply: Pointer to the fw reply structure
> > > > + *
> > > > + * mucse_fw_send_cmd_wait sends req to the pf-cm3 mailbox and
> > > > + * waits for a reply from fw.
> > > > + *
> > > > + * Returns 0 on success, negative on failure
> > > > + **/
> > > > +static int mucse_fw_send_cmd_wait(struct mucse_hw *hw,
> > > > +				  struct mbx_fw_cmd_req *req,
> > > > +				  struct mbx_fw_cmd_reply *reply)
> > > > +{
> > > > +	int err;
> > > > +	int retry_cnt = 3;
> > > > +
> > > > +	if (!hw || !req || !reply || !hw->mbx.ops.read_posted)
> > > 
> > > Can this happen?
> > > 
> > > If this is not supposed to happen, it is better the driver opps, so
> > > you get a stack trace and find where the driver is broken.
> > > 
> > Yes, it is not supposed to happen. So, you mean I should remove this
> > check in order to get an oops when this condition happens?
> 
> You should remove all defensive code. Let it explode with an Oops, so
> you can find your bugs.
> 

Got it.

> > > > +		return -EINVAL;
> > > > +
> > > > +	/* if pcie off, nothing todo */
> > > > +	if (pci_channel_offline(hw->pdev))
> > > > +		return -EIO;
> > > 
> > > What can cause it to go offline? Is this to do with PCIe hotplug?
> > > 
> > Yes, I try to detect a PCIe hotplug condition with 'pci_channel_offline'.
> > If that happens, the driver should never do BAR reads/writes, so it
> > returns here.
> 
> I don't know PCI hotplug too well, but I assume the driver core will
> call the .release function. Can this function be called as part of
> release? What actually happens on the PCI bus when you try to access a
> device which no longer exists?
> 

This function may be called as part of release:
->release
-->unregister_netdev
--->ndo_stop
---->this function
Based on what I have come across, some devices return 0xffffffff, while
others may hang when trying to access a device which no longer exists.

> How have you tested this? Do you have the ability to do a hot{un}plug?
> 

I tested hot{un}plug with an OCP card before.
But I think all the code related to PCIe hot{un}plug should be in a
separate patch; I will move it to that patch.

> > > > +	if (mutex_lock_interruptible(&hw->mbx.lock))
> > > > +		return -EAGAIN;
> > > 
> > > mutex_lock_interruptible() returns -EINTR, which is what you should
> > > return, not -EAGAIN.
> > > 
> > Got it, I should return '-EINTR' here.
> 
> No, you should return whatever mutex_lock_interruptible()
> returns. Whenever you call a function which returns an error code, you
> should pass that error code up the call stack. Never replace one error
> code with another.
> 

Ok, I see.

> > > > +	if (reply->error_code)
> > > > +		return -reply->error_code;
> > > 
> > > The mbox is using linux error codes? 
> > > 
> > It is used only between the driver and fw, and may simply be like this:
> > 0        -- no error
> > non-zero -- error
> > So, it is not using Linux error codes.
> 
> Your functions should always use Linux/POSIX error codes. So if your
> firmware says an error has happened, turn it into a Linux/POSIX error
> code: EINVAL, ETIMEDOUT, EIO, whatever makes the most sense.
> 
> 	Andrew
> 

Got it, I will turn it into a linux/POSIX error code.

Thanks for your feedback.

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2025-07-08  2:03 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2025-07-03  1:48 [PATCH 00/15] Add driver for 1Gbe network chips from MUCSE Dong Yibo
2025-07-03  1:48 ` [PATCH 01/15] net: rnpgbe: Add build support for rnpgbe Dong Yibo
2025-07-03 16:25   ` Andrew Lunn
2025-07-04  2:10     ` Yibo Dong
2025-07-03  1:48 ` [PATCH 02/15] net: rnpgbe: Add n500/n210 chip support Dong Yibo
2025-07-04 18:03   ` Andrew Lunn
2025-07-07  1:32     ` Yibo Dong
2025-07-03  1:48 ` [PATCH 03/15] net: rnpgbe: Add basic mbx ops support Dong Yibo
2025-07-04 18:13   ` Andrew Lunn
2025-07-07  6:39     ` Yibo Dong
2025-07-07 12:00       ` Andrew Lunn
2025-07-08  1:44         ` Yibo Dong
2025-07-03  1:48 ` [PATCH 04/15] net: rnpgbe: Add get_capability mbx_fw " Dong Yibo
2025-07-04 18:25   ` Andrew Lunn
2025-07-07  7:37     ` Yibo Dong
2025-07-07 12:09       ` Andrew Lunn
2025-07-08  2:01         ` Yibo Dong
2025-07-03  1:48 ` [PATCH 05/15] net: rnpgbe: Add download firmware for n210 chip Dong Yibo
2025-07-04 18:33   ` Andrew Lunn
2025-07-07  8:14     ` Yibo Dong
2025-07-03  1:48 ` [PATCH 06/15] net: rnpgbe: Add some functions for hw->ops Dong Yibo
2025-07-03  1:48 ` [PATCH 07/15] net: rnpgbe: Add get mac from hw Dong Yibo
2025-07-03  1:48 ` [PATCH 08/15] net: rnpgbe: Add irq support Dong Yibo
2025-07-03  1:48 ` [PATCH 09/15] net: rnpgbe: Add netdev register and init tx/rx memory Dong Yibo
2025-07-03  1:48 ` [PATCH 10/15] net: rnpgbe: Add netdev irq in open Dong Yibo
2025-07-03  1:48 ` [PATCH 11/15] net: rnpgbe: Add setup hw ring-vector, true up/down hw Dong Yibo
2025-07-03  1:48 ` [PATCH 12/15] net: rnpgbe: Add link up handler Dong Yibo
2025-07-03  1:48 ` [PATCH 13/15] net: rnpgbe: Add base tx functions Dong Yibo
2025-07-03  1:48 ` [PATCH 14/15] net: rnpgbe: Add base rx function Dong Yibo
2025-07-03  1:48 ` [PATCH 15/15] net: rnpgbe: Add ITR for rx Dong Yibo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).