* [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
@ 2026-01-23 1:17 illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
` (15 more replies)
0 siblings, 16 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
This patch series adds the nbl driver, which supports the Nebula-matrix
18100 and 18110 series of network cards.
This submission is the first phase, which only supports PF-based
Ethernet transmit and receive functionality. Once this is merged, we
will submit additional patches to implement support for other features,
such as ethtool support, flow management, adminq messages, VF support,
debugfs support, etc.
Changes v2 -> v3
Link to v2: https://lore.kernel.org/netdev/20260109100146.63569-1-illusion.wang@nebula-matrix.com/
1. Cut down to a minimal driver:
   - delete VF support
   - use promiscuous mode to cut down flow management
   - drop patch 15 of v2
   - delete adminq messages
   - delete abnormal-event handling
   - delete some unimportant interfaces
2. Fix issues found by AI review
Changes v1->v2
Link to v1: https://lore.kernel.org/netdev/20251223035113.31122-1-illusion.wang@nebula-matrix.com/
1. Fix format issues and compilation issues
   - Paolo Abeni
2. Add sysfs patch and drop coexisting patch
   - Andrew Lunn
3. Delete some unimportant ndo operations
4. Add machine-generated headers patch
5. Fix the issues found in patches 1-2 and apply the same fixes to the
other patches
6. Fix issues found by NIPA
illusion.wang (15):
net/nebula-matrix: add minimum nbl build framework
net/nebula-matrix: add our driver architecture
net/nebula-matrix: add machine-generated headers and chip definitions
net/nebula-matrix: add channel layer implementation and the
corresponding hw interfaces
net/nebula-matrix: add resource layer common part implementation
net/nebula-matrix: add intr resource implementation
net/nebula-matrix: add queue resource implementation
net/nebula-matrix: add vsi resource implementation
net/nebula-matrix: add txrx resource interfaces and tx func
net/nebula-matrix: add txrx interrupt handling logic
net/nebula-matrix: add Dispatch layer implementation
net/nebula-matrix: add common/ctrl dev init/reinit operation
net/nebula-matrix: add net dev init/reinit operation
net/nebula-matrix: add common dev start/stop operation
net/nebula-matrix: add net dev start/stop operation
.../ethernet/nebula-matrix/m18100.rst | 47 +
MAINTAINERS | 10 +
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/nebula-matrix/Kconfig | 37 +
drivers/net/ethernet/nebula-matrix/Makefile | 6 +
.../net/ethernet/nebula-matrix/nbl/Makefile | 26 +
.../nbl/nbl_channel/nbl_channel.c | 1289 ++++++
.../nbl/nbl_channel/nbl_channel.h | 177 +
.../nebula-matrix/nbl/nbl_common/nbl_common.c | 302 ++
.../nebula-matrix/nbl/nbl_common/nbl_common.h | 29 +
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 130 +
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 1369 ++++++
.../nebula-matrix/nbl/nbl_core/nbl_dev.h | 131 +
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.c | 1977 +++++++++
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.h | 76 +
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 1256 ++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.h | 132 +
.../nebula-matrix/nbl/nbl_hw/nbl_hw.h | 172 +
.../nbl_hw/nbl_hw_leonis/base/nbl_datapath.h | 11 +
.../nbl_hw_leonis/base/nbl_datapath_dped.h | 2152 +++++++++
.../nbl_hw_leonis/base/nbl_datapath_dstore.h | 929 ++++
.../nbl_hw_leonis/base/nbl_datapath_ucar.h | 414 ++
.../nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h | 10 +
.../nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h | 665 +++
.../nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h | 1397 ++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 1882 ++++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h | 1142 +++++
.../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c | 3903 +++++++++++++++++
.../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h | 12 +
.../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c | 1129 +++++
.../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h | 15 +
.../nbl_hw_leonis/nbl_resource_leonis.c | 582 +++
.../nbl_hw_leonis/nbl_resource_leonis.h | 25 +
.../nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h | 139 +
.../nebula-matrix/nbl/nbl_hw/nbl_interrupt.c | 314 ++
.../nebula-matrix/nbl/nbl_hw/nbl_interrupt.h | 13 +
.../nebula-matrix/nbl/nbl_hw/nbl_queue.c | 60 +
.../nebula-matrix/nbl/nbl_hw/nbl_queue.h | 11 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.c | 299 ++
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 439 ++
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.c | 1984 +++++++++
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.h | 184 +
.../nebula-matrix/nbl/nbl_hw/nbl_vsi.c | 120 +
.../nebula-matrix/nbl/nbl_hw/nbl_vsi.h | 12 +
.../nbl/nbl_include/nbl_def_channel.h | 481 ++
.../nbl/nbl_include/nbl_def_common.h | 165 +
.../nbl/nbl_include/nbl_def_dev.h | 28 +
.../nbl/nbl_include/nbl_def_dispatch.h | 92 +
.../nbl/nbl_include/nbl_def_hw.h | 105 +
.../nbl/nbl_include/nbl_def_resource.h | 93 +
.../nbl/nbl_include/nbl_def_service.h | 80 +
.../nbl/nbl_include/nbl_include.h | 326 ++
.../nbl/nbl_include/nbl_product_base.h | 20 +
.../net/ethernet/nebula-matrix/nbl/nbl_main.c | 352 ++
55 files changed, 26753 insertions(+)
create mode 100644 Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
create mode 100644 drivers/net/ethernet/nebula-matrix/Kconfig
create mode 100644 drivers/net/ethernet/nebula-matrix/Makefile
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/Makefile
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
--
2.47.3
* [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:31 ` Randy Dunlap
` (2 more replies)
2026-01-23 1:17 ` [PATCH v3 net-next 02/15] net/nebula-matrix: add our driver architecture illusion.wang
` (14 subsequent siblings)
15 siblings, 3 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
1. Add minimal build infrastructure for the nbl driver.
2. Implement the framework for PCI device initialization.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../ethernet/nebula-matrix/m18100.rst | 47 ++++++++
MAINTAINERS | 10 ++
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/nebula-matrix/Kconfig | 37 ++++++
drivers/net/ethernet/nebula-matrix/Makefile | 6 +
.../net/ethernet/nebula-matrix/nbl/Makefile | 11 ++
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 27 +++++
.../nbl/nbl_include/nbl_include.h | 23 ++++
.../net/ethernet/nebula-matrix/nbl/nbl_main.c | 113 ++++++++++++++++++
10 files changed, 276 insertions(+)
create mode 100644 Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
create mode 100644 drivers/net/ethernet/nebula-matrix/Kconfig
create mode 100644 drivers/net/ethernet/nebula-matrix/Makefile
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/Makefile
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
diff --git a/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst b/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
new file mode 100644
index 000000000000..c763042ce3ee
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
@@ -0,0 +1,47 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================================
+Linux Base Driver for Nebula-matrix M18100-NIC family
+============================================================
+
+Overview:
+=========
+M18100-NIC is a series of network interface cards for the data center area.
+
+The driver supports link speeds of 100GbE/25GbE/10GbE.
+
+M18100-NIC devices support an MSI-X interrupt vector for each Tx/Rx queue and
+interrupt moderation.
+
+M18100-NIC devices also support various offload features such as checksum
+offload and Receive-Side Scaling (RSS).
+
+Supported PCI vendor ID/device IDs:
+===================================
+
+1f0f:3403 - M18110 Family PF
+1f0f:3404 - M18110 Lx Family PF
+1f0f:3405 - M18110 Family BASE-T PF
+1f0f:3406 - M18110 Lx Family BASE-T PF
+1f0f:3407 - M18110 Family OCP PF
+1f0f:3408 - M18110 Lx Family OCP PF
+1f0f:3409 - M18110 Family BASE-T OCP PF
+1f0f:340a - M18110 Lx Family BASE-T OCP PF
+1f0f:340b - M18100 Family PF
+1f0f:340c - M18100 Lx Family PF
+1f0f:340d - M18100 Family BASE-T PF
+1f0f:340e - M18100 Lx Family BASE-T PF
+1f0f:340f - M18100 Family OCP PF
+1f0f:3410 - M18100 Lx Family OCP PF
+1f0f:3411 - M18100 Family BASE-T OCP PF
+1f0f:3412 - M18100 Lx Family BASE-T OCP PF
+
+Support
+=======
+
+For more information about M18100-NIC, please visit the following URL:
+https://www.nebula-matrix.com/
+
+If an issue is identified with the released source code on the supported kernel
+with a supported adapter, email the specific information related to the issue to
+open@nebula-matrix.com.
diff --git a/MAINTAINERS b/MAINTAINERS
index afc71089ba09..229b7a3c3f34 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18018,6 +18018,16 @@ F: Documentation/devicetree/bindings/hwmon/nuvoton,nct7363.yaml
F: Documentation/hwmon/nct7363.rst
F: drivers/hwmon/nct7363.c
+NEBULA-MATRIX ETHERNET DRIVER (nebula-matrix)
+M: Illusion Wang <illusion.wang@nebula-matrix.com>
+M: Dimon Zhao <dimon.zhao@nebula-matrix.com>
+M: Alvin Wang <alvin.wang@nebula-matrix.com>
+M: Sam Chen <sam.chen@nebula-matrix.com>
+L: netdev@vger.kernel.org
+S: Maintained
+F: Documentation/networking/device_drivers/ethernet/nebula-matrix/*
+F: drivers/net/ethernet/nebula-matrix/
+
NETCONSOLE
M: Breno Leitao <leitao@debian.org>
S: Maintained
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index 678beefe314a..ea2edd2811ce 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -131,6 +131,7 @@ config FEALNX
source "drivers/net/ethernet/ni/Kconfig"
source "drivers/net/ethernet/natsemi/Kconfig"
+source "drivers/net/ethernet/nebula-matrix/Kconfig"
source "drivers/net/ethernet/neterion/Kconfig"
source "drivers/net/ethernet/netronome/Kconfig"
source "drivers/net/ethernet/8390/Kconfig"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index c9922628d8d6..547485409f48 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -68,6 +68,7 @@ obj-$(CONFIG_NET_VENDOR_MUCSE) += mucse/
obj-$(CONFIG_NET_VENDOR_MYRI) += myricom/
obj-$(CONFIG_FEALNX) += fealnx.o
obj-$(CONFIG_NET_VENDOR_NATSEMI) += natsemi/
+obj-$(CONFIG_NET_VENDOR_NEBULA_MATRIX) += nebula-matrix/
obj-$(CONFIG_NET_VENDOR_NETERION) += neterion/
obj-$(CONFIG_NET_VENDOR_NETRONOME) += netronome/
obj-$(CONFIG_NET_VENDOR_NI) += ni/
diff --git a/drivers/net/ethernet/nebula-matrix/Kconfig b/drivers/net/ethernet/nebula-matrix/Kconfig
new file mode 100644
index 000000000000..14c27c63d7ec
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/Kconfig
@@ -0,0 +1,37 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Nebula-matrix network device configuration
+#
+
+config NET_VENDOR_NEBULA_MATRIX
+ bool "Nebula-matrix devices"
+ default y
+ help
+ If you have a network (Ethernet) card belonging to this class, say Y.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Nebula-matrix cards. If you say Y, you will be
+ asked for your specific card in the following questions.
+
+if NET_VENDOR_NEBULA_MATRIX
+
+config NBL_CORE
+ tristate "Nebula-matrix Ethernet Controller m18100 Family support"
+ depends on 64BIT && PCI
+ default m
+ select PAGE_POOL
+ help
+ This driver supports Nebula-matrix Ethernet Controller m18100 Family of
+ devices. For more information about this product, go to the product
+ description with smart NIC:
+
+ <http://www.nebula-matrix.com>
+
+ More specific information on configuring the driver is in
+ <file:Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst>.
+
+ To compile this driver as a module, choose M here. The module
+ will be called nbl_core.
+
+endif # NET_VENDOR_NEBULA_MATRIX
diff --git a/drivers/net/ethernet/nebula-matrix/Makefile b/drivers/net/ethernet/nebula-matrix/Makefile
new file mode 100644
index 000000000000..dc6bf7dcd6bf
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Nebula-matrix network device drivers.
+#
+
+obj-$(CONFIG_NBL_CORE) += nbl/
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
new file mode 100644
index 000000000000..df16a3436a5c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2025 Nebula Matrix Limited.
+# Author:
+
+obj-$(CONFIG_NBL_CORE) := nbl_core.o
+
+nbl_core-objs += nbl_main.o
+
+# Provide include files
+ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
new file mode 100644
index 000000000000..51aa4d87146f
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_CORE_H_
+#define _NBL_CORE_H_
+
+#include <linux/pci.h>
+#include "nbl_include.h"
+#define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
+
+#define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
+#define NBL_CAP_IS_NET(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_NET_BIT)
+#define NBL_CAP_IS_NIC(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_NIC_BIT)
+#define NBL_CAP_IS_OCP(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_OCP_BIT)
+#define NBL_CAP_IS_LEONIS(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_LEONIS_BIT)
+
+enum {
+ NBL_CAP_HAS_CTRL_BIT = 0,
+ NBL_CAP_HAS_NET_BIT,
+ NBL_CAP_IS_NIC_BIT,
+ NBL_CAP_IS_LEONIS_BIT,
+ NBL_CAP_IS_OCP_BIT,
+};
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
new file mode 100644
index 000000000000..f12bf019dfee
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_INCLUDE_H_
+#define _NBL_INCLUDE_H_
+
+#include <linux/types.h>
+
+/* ------ Basic definitions ------- */
+#define NBL_DRIVER_NAME "nbl_core"
+
+struct nbl_func_caps {
+ u32 has_ctrl:1;
+ u32 has_net:1;
+ u32 is_nic:1;
+ u32 is_ocp:1;
+ u32 rsv:28;
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
new file mode 100644
index 000000000000..2e3026fc3250
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include <linux/aer.h>
+#include "nbl_core.h"
+
+static int nbl_probe(struct pci_dev *pdev,
+ const struct pci_device_id __always_unused *id)
+{
+ struct device *dev = &pdev->dev;
+
+ dev_dbg(dev, "nbl probe ok!\n");
+ return 0;
+}
+
+static void nbl_remove(struct pci_dev *pdev)
+{
+ dev_dbg(&pdev->dev, "nbl remove OK!\n");
+}
+
+#define NBL_VENDOR_ID (0x1F0F)
+
+/*
+ * Leonis DeviceID
+ * 0x3403-0x3412 for snic v3r1 product
+ */
+#define NBL_DEVICE_ID_M18110 (0x3403)
+#define NBL_DEVICE_ID_M18110_LX (0x3404)
+#define NBL_DEVICE_ID_M18110_BASE_T (0x3405)
+#define NBL_DEVICE_ID_M18110_LX_BASE_T (0x3406)
+#define NBL_DEVICE_ID_M18110_OCP (0x3407)
+#define NBL_DEVICE_ID_M18110_LX_OCP (0x3408)
+#define NBL_DEVICE_ID_M18110_BASE_T_OCP (0x3409)
+#define NBL_DEVICE_ID_M18110_LX_BASE_T_OCP (0x340a)
+#define NBL_DEVICE_ID_M18000 (0x340b)
+#define NBL_DEVICE_ID_M18000_LX (0x340c)
+#define NBL_DEVICE_ID_M18000_BASE_T (0x340d)
+#define NBL_DEVICE_ID_M18000_LX_BASE_T (0x340e)
+#define NBL_DEVICE_ID_M18000_OCP (0x340f)
+#define NBL_DEVICE_ID_M18000_LX_OCP (0x3410)
+#define NBL_DEVICE_ID_M18000_BASE_T_OCP (0x3411)
+#define NBL_DEVICE_ID_M18000_LX_BASE_T_OCP (0x3412)
+
+static const struct pci_device_id nbl_id_table[] = {
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_BASE_T),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_BASE_T),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_BASE_T_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_BASE_T_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_BASE_T),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX_BASE_T),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_BASE_T_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ { PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX_BASE_T_OCP),
+ .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+ BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+ /* required as sentinel */
+ {
+ 0,
+ }
+};
+MODULE_DEVICE_TABLE(pci, nbl_id_table);
+
+static struct pci_driver nbl_driver = {
+ .name = NBL_DRIVER_NAME,
+ .id_table = nbl_id_table,
+ .probe = nbl_probe,
+ .remove = nbl_remove,
+};
+
+module_pci_driver(nbl_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Nebula Matrix Network Driver");
--
2.47.3
* [PATCH v3 net-next 02/15] net/nebula-matrix: add our driver architecture
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 3:57 ` Andrew Lunn
2026-01-23 1:17 ` [PATCH v3 net-next 03/15] net/nebula-matrix: add machine-generated headers and chip definitions illusion.wang
` (13 subsequent siblings)
15 siblings, 1 reply; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
This patch adds the Hardware (HW), Channel, Resource, Dispatch, Service,
and Device layers: struct initialization/deinitialization and operation
set registration/unregistration.
Our driver architecture is relatively complex because the code is highly
reusable and designed to support multiple features. Additionally, the
codebase supports multiple chip variants, each with distinct
hardware-software interactions.
To ensure compatibility, our architecture is divided into the following
layers:
1. Dev Layer (Device Layer)
The top-level business logic layer where all operations are
device-centric. Every operation is performed relative to the device
context. The integration of base functions encompasses:
management (ctrl_dev, only for the Leonis PF0), network (net_dev), and common.
2. Service Layer
The Service layer includes various netops services such as packet
receiving/sending, management services, etc. These are provided to the
upper layer for use or registered as the operations (ops) of related devices.
It describes all the service capabilities possessed by the device.
3. Dispatch Layer
The distribution from services to specific data operations is mainly
divided into two types: direct pass-through and handling by the
management PF. It shields the upper layer from the differences in
specific underlying locations.
It describes the processing locations and paths of the services.
4. Resource Layer
Handles tasks dispatched from Dispatch Layer. These tasks fall into two
categories:
4.1 Hardware control
The Resource Layer further invokes the HW Layer when hardware access is
needed, as only the HW Layer has OS-level privileges.
4.2 Software resource management
Operations like packet statistics collection that don't require hardware
access.
5. HW Layer (Hardware Layer)
Serves the Resource Layer by interacting with different hardware
chipsets. It writes to hardware registers to drive the hardware based on
Resource Layer directives.
6. Channel Layer
Handles communication between PF0 and the other PFs (our PF0 has the
ctrl function), and provides basic interaction channels.
7. Common Layer
Provides fundamental services, including workqueue management, debug
logging, and so on.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 9 +-
.../nbl/nbl_channel/nbl_channel.c | 135 ++++++++++++
.../nbl/nbl_channel/nbl_channel.h | 45 ++++
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 61 +++++-
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 112 ++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_dev.h | 39 ++++
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.c | 114 ++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.h | 43 ++++
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 118 ++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.h | 37 ++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 177 +++++++++++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h | 13 ++
.../nbl_hw_leonis/nbl_resource_leonis.c | 152 +++++++++++++
.../nbl_hw_leonis/nbl_resource_leonis.h | 12 +
.../nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h | 139 ++++++++++++
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 55 +++++
.../nbl/nbl_include/nbl_def_channel.h | 32 +++
.../nbl/nbl_include/nbl_def_common.h | 73 ++++++
.../nbl/nbl_include/nbl_def_dev.h | 28 +++
.../nbl/nbl_include/nbl_def_dispatch.h | 34 +++
.../nbl/nbl_include/nbl_def_hw.h | 23 ++
.../nbl/nbl_include/nbl_def_resource.h | 28 +++
.../nbl/nbl_include/nbl_def_service.h | 24 ++
.../nbl/nbl_include/nbl_include.h | 13 +-
.../nbl/nbl_include/nbl_product_base.h | 20 ++
.../net/ethernet/nebula-matrix/nbl/nbl_main.c | 207 ++++++++++++++++++
26 files changed, 1740 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index df16a3436a5c..5162d1f12b51 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -4,8 +4,15 @@
obj-$(CONFIG_NBL_CORE) := nbl_core.o
-nbl_core-objs += nbl_main.o
+nbl_core-objs += nbl_channel/nbl_channel.o \
+ nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
+ nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \
+ nbl_core/nbl_dispatch.o \
+ nbl_core/nbl_service.o \
+ nbl_core/nbl_dev.o \
+ nbl_main.o
# Provide include files
ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw
ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
new file mode 100644
index 000000000000..a262a765c141
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <linux/delay.h>
+#include "nbl_channel.h"
+
+static struct nbl_channel_ops chan_ops = {
+};
+
+static int
+nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
+ struct nbl_init_param *param,
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis)
+{
+ struct nbl_common_info *common;
+ struct nbl_hw_ops_tbl *hw_ops_tbl;
+ struct nbl_chan_info *mailbox;
+ struct device *dev;
+
+ dev = NBL_ADAP_TO_DEV(adapter);
+ common = NBL_ADAP_TO_COMMON(adapter);
+ hw_ops_tbl = NBL_ADAP_TO_HW_OPS_TBL(adapter);
+
+ *chan_mgt_leonis = devm_kzalloc(dev,
+ sizeof(struct nbl_channel_mgt_leonis),
+ GFP_KERNEL);
+ if (!*chan_mgt_leonis)
+ goto alloc_channel_mgt_leonis_fail;
+
+ NBL_CHAN_MGT_TO_COMMON(&(*chan_mgt_leonis)->chan_mgt) = common;
+ (*chan_mgt_leonis)->chan_mgt.hw_ops_tbl = hw_ops_tbl;
+
+ mailbox = devm_kzalloc(dev, sizeof(struct nbl_chan_info), GFP_KERNEL);
+ if (!mailbox)
+ goto alloc_mailbox_fail;
+ mailbox->chan_type = NBL_CHAN_TYPE_MAILBOX;
+ NBL_CHAN_MGT_TO_MBX(&(*chan_mgt_leonis)->chan_mgt) = mailbox;
+
+ return 0;
+alloc_mailbox_fail:
+ devm_kfree(dev, *chan_mgt_leonis);
+ *chan_mgt_leonis = NULL;
+alloc_channel_mgt_leonis_fail:
+ return -ENOMEM;
+}
+
+static void
+nbl_chan_remove_chan_mgt(struct nbl_common_info *common,
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+
+ devm_kfree(dev, NBL_CHAN_MGT_TO_MBX(&(*chan_mgt_leonis)->chan_mgt));
+
+ /* check and remove command queue */
+ devm_kfree(dev, *chan_mgt_leonis);
+ *chan_mgt_leonis = NULL;
+}
+
+static void nbl_chan_remove_ops(struct device *dev,
+ struct nbl_channel_ops_tbl **chan_ops_tbl)
+{
+ if (!dev || !chan_ops_tbl)
+ return;
+
+ devm_kfree(dev, *chan_ops_tbl);
+ *chan_ops_tbl = NULL;
+}
+
+static int nbl_chan_setup_ops(struct device *dev,
+ struct nbl_channel_ops_tbl **chan_ops_tbl,
+ struct nbl_channel_mgt_leonis *chan_mgt)
+{
+ if (!chan_mgt)
+ return 0;
+ *chan_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_channel_ops_tbl),
+ GFP_KERNEL);
+ if (!*chan_ops_tbl)
+ return -ENOMEM;
+
+ NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &chan_ops;
+ NBL_CHAN_OPS_TBL_TO_PRIV(*chan_ops_tbl) = chan_mgt;
+ return 0;
+}
+
+int nbl_chan_init_common(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adap = (struct nbl_adapter *)p;
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis;
+ struct nbl_channel_ops_tbl **chan_ops_tbl;
+ struct nbl_common_info *common;
+ struct device *dev;
+ int ret = 0;
+
+ dev = NBL_ADAP_TO_DEV(adap);
+ common = NBL_ADAP_TO_COMMON(adap);
+ chan_mgt_leonis =
+ (struct nbl_channel_mgt_leonis **)&NBL_ADAP_TO_CHAN_MGT(adap);
+ chan_ops_tbl = &NBL_ADAP_TO_CHAN_OPS_TBL(adap);
+
+ ret = nbl_chan_setup_chan_mgt(adap, param, chan_mgt_leonis);
+ if (ret)
+ goto setup_mgt_fail;
+
+ ret = nbl_chan_setup_ops(dev, chan_ops_tbl, *chan_mgt_leonis);
+ if (ret)
+ goto setup_ops_fail;
+
+ return 0;
+
+setup_ops_fail:
+ nbl_chan_remove_chan_mgt(common, chan_mgt_leonis);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_chan_remove_common(void *p)
+{
+ struct nbl_adapter *adap = (struct nbl_adapter *)p;
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis;
+ struct nbl_channel_ops_tbl **chan_ops_tbl;
+ struct nbl_common_info *common;
+ struct device *dev;
+
+ dev = NBL_ADAP_TO_DEV(adap);
+ common = NBL_ADAP_TO_COMMON(adap);
+ chan_mgt_leonis =
+ (struct nbl_channel_mgt_leonis **)&NBL_ADAP_TO_CHAN_MGT(adap);
+ chan_ops_tbl = &NBL_ADAP_TO_CHAN_OPS_TBL(adap);
+
+ nbl_chan_remove_chan_mgt(common, chan_mgt_leonis);
+ nbl_chan_remove_ops(dev, chan_ops_tbl);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
new file mode 100644
index 000000000000..d207ce79d89f
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_CHANNEL_H_
+#define _NBL_CHANNEL_H_
+
+#include "nbl_core.h"
+#define NBL_CHAN_MGT_TO_COMMON(chan_mgt) ((chan_mgt)->common)
+#define NBL_CHAN_MGT_TO_DEV(chan_mgt) \
+ NBL_COMMON_TO_DEV(NBL_CHAN_MGT_TO_COMMON(chan_mgt))
+#define NBL_CHAN_MGT_TO_HW_OPS_TBL(chan_mgt) ((chan_mgt)->hw_ops_tbl)
+#define NBL_CHAN_MGT_TO_HW_OPS(chan_mgt) \
+ (NBL_CHAN_MGT_TO_HW_OPS_TBL(chan_mgt)->ops)
+#define NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt) \
+ (NBL_CHAN_MGT_TO_HW_OPS_TBL(chan_mgt)->priv)
+#define NBL_CHAN_MGT_TO_MBX(chan_mgt) \
+ ((chan_mgt)->chan_info[NBL_CHAN_TYPE_MAILBOX])
+
+#define NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type) \
+ ((chan_mgt)->chan_info[chan_type])
+
+struct nbl_chan_info {
+ u8 chan_type;
+
+};
+
+struct nbl_channel_mgt {
+ struct nbl_common_info *common;
+ struct nbl_hw_ops_tbl *hw_ops_tbl;
+ struct nbl_chan_info *chan_info[NBL_CHAN_TYPE_MAX];
+ void *handle_hash_tbl;
+};
+
+/* Mgt structure for each product.
+ * Every individual mgt must have the common mgt as its first member, and
+ * contain its unique data structures in the rest of it.
+ */
+struct nbl_channel_mgt_leonis {
+ struct nbl_channel_mgt chan_mgt;
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index 51aa4d87146f..9db7c9b4a739 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -8,7 +8,32 @@
#define _NBL_CORE_H_
#include <linux/pci.h>
-#include "nbl_include.h"
+#include "nbl_product_base.h"
+#include "nbl_def_channel.h"
+#include "nbl_def_hw.h"
+#include "nbl_def_resource.h"
+#include "nbl_def_dispatch.h"
+#include "nbl_def_service.h"
+#include "nbl_def_dev.h"
+#include "nbl_def_common.h"
+
+#define NBL_ADAP_TO_PDEV(adapter) ((adapter)->pdev)
+#define NBL_ADAP_TO_DEV(adapter) (&((adapter)->pdev->dev))
+#define NBL_ADAP_TO_COMMON(adapter) (&((adapter)->common))
+#define NBL_ADAP_TO_PRODUCT_BASE_OPS(adapter) ((adapter)->product_base_ops)
+
+#define NBL_ADAP_TO_HW_MGT(adapter) ((adapter)->core.hw_mgt)
+#define NBL_ADAP_TO_RES_MGT(adapter) ((adapter)->core.res_mgt)
+#define NBL_ADAP_TO_DISP_MGT(adapter) ((adapter)->core.disp_mgt)
+#define NBL_ADAP_TO_SERV_MGT(adapter) ((adapter)->core.serv_mgt)
+#define NBL_ADAP_TO_DEV_MGT(adapter) ((adapter)->core.dev_mgt)
+#define NBL_ADAP_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt)
+#define NBL_ADAP_TO_HW_OPS_TBL(adapter) ((adapter)->intf.hw_ops_tbl)
+#define NBL_ADAP_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl)
+#define NBL_ADAP_TO_DISP_OPS_TBL(adapter) ((adapter)->intf.dispatch_ops_tbl)
+#define NBL_ADAP_TO_SERV_OPS_TBL(adapter) ((adapter)->intf.service_ops_tbl)
+#define NBL_ADAP_TO_DEV_OPS_TBL(adapter) ((adapter)->intf.dev_ops_tbl)
+#define NBL_ADAP_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl)
#define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
#define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
@@ -24,4 +49,38 @@ enum {
NBL_CAP_IS_LEONIS_BIT,
NBL_CAP_IS_OCP_BIT,
};
+
+struct nbl_interface {
+ struct nbl_hw_ops_tbl *hw_ops_tbl;
+ struct nbl_resource_ops_tbl *resource_ops_tbl;
+ struct nbl_dispatch_ops_tbl *dispatch_ops_tbl;
+ struct nbl_service_ops_tbl *service_ops_tbl;
+ struct nbl_dev_ops_tbl *dev_ops_tbl;
+ struct nbl_channel_ops_tbl *channel_ops_tbl;
+};
+
+struct nbl_core {
+ void *hw_mgt;
+ void *res_mgt;
+ void *disp_mgt;
+ void *serv_mgt;
+ void *dev_mgt;
+ void *chan_mgt;
+};
+
+struct nbl_adapter {
+ struct pci_dev *pdev;
+ struct nbl_core core;
+ struct nbl_interface intf;
+ struct nbl_common_info common;
+ struct nbl_product_base_ops *product_base_ops;
+ struct nbl_init_param init_param;
+};
+
+struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
+ struct nbl_init_param *param);
+void nbl_core_remove(struct nbl_adapter *adapter);
+int nbl_core_start(struct nbl_adapter *adapter, struct nbl_init_param *param);
+void nbl_core_stop(struct nbl_adapter *adapter);
+
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
new file mode 100644
index 000000000000..dbeffcfe397e
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_vlan.h>
+
+#include "nbl_dev.h"
+
+static struct nbl_dev_ops dev_ops;
+
+static int nbl_dev_setup_dev_mgt(struct nbl_common_info *common,
+ struct nbl_dev_mgt **dev_mgt)
+{
+ *dev_mgt = devm_kzalloc(NBL_COMMON_TO_DEV(common),
+ sizeof(struct nbl_dev_mgt), GFP_KERNEL);
+ if (!*dev_mgt)
+ return -ENOMEM;
+
+ (*dev_mgt)->common = common;
+ return 0;
+}
+
+static void nbl_dev_remove_dev_mgt(struct nbl_common_info *common,
+ struct nbl_dev_mgt **dev_mgt)
+{
+ devm_kfree(NBL_COMMON_TO_DEV(common), *dev_mgt);
+ *dev_mgt = NULL;
+}
+
+static void nbl_dev_remove_ops(struct device *dev,
+ struct nbl_dev_ops_tbl **dev_ops_tbl)
+{
+ devm_kfree(dev, *dev_ops_tbl);
+ *dev_ops_tbl = NULL;
+}
+
+static int nbl_dev_setup_ops(struct device *dev,
+ struct nbl_dev_ops_tbl **dev_ops_tbl,
+ struct nbl_adapter *adapter)
+{
+ *dev_ops_tbl =
+ devm_kzalloc(dev, sizeof(struct nbl_dev_ops_tbl), GFP_KERNEL);
+ if (!*dev_ops_tbl)
+ return -ENOMEM;
+
+ (*dev_ops_tbl)->ops = &dev_ops;
+ (*dev_ops_tbl)->priv = adapter;
+
+ return 0;
+}
+
+int nbl_dev_init(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct device *dev = NBL_ADAP_TO_DEV(adapter);
+ struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter);
+ struct nbl_dev_mgt **dev_mgt =
+ (struct nbl_dev_mgt **)&NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_ops_tbl **dev_ops_tbl =
+ &NBL_ADAP_TO_DEV_OPS_TBL(adapter);
+ struct nbl_service_ops_tbl *serv_ops_tbl =
+ NBL_ADAP_TO_SERV_OPS_TBL(adapter);
+ struct nbl_channel_ops_tbl *chan_ops_tbl =
+ NBL_ADAP_TO_CHAN_OPS_TBL(adapter);
+ int ret;
+
+ ret = nbl_dev_setup_dev_mgt(common, dev_mgt);
+ if (ret)
+ goto setup_mgt_fail;
+
+ (*dev_mgt)->serv_ops_tbl = serv_ops_tbl;
+ (*dev_mgt)->chan_ops_tbl = chan_ops_tbl;
+ ret = nbl_dev_setup_ops(dev, dev_ops_tbl, adapter);
+ if (ret)
+ goto setup_ops_fail;
+
+ return 0;
+
+setup_ops_fail:
+
+ nbl_dev_remove_dev_mgt(common, dev_mgt);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_dev_remove(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct device *dev = NBL_ADAP_TO_DEV(adapter);
+ struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter);
+ struct nbl_dev_mgt **dev_mgt =
+ (struct nbl_dev_mgt **)&NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_ops_tbl **dev_ops_tbl =
+ &NBL_ADAP_TO_DEV_OPS_TBL(adapter);
+
+ nbl_dev_remove_ops(dev, dev_ops_tbl);
+
+ nbl_dev_remove_dev_mgt(common, dev_mgt);
+}
+
+int nbl_dev_start(void *p, struct nbl_init_param *param)
+{
+ return 0;
+}
+
+void nbl_dev_stop(void *p)
+{
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
new file mode 100644
index 000000000000..8a7c85db8ceb
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEV_H_
+#define _NBL_DEV_H_
+
+#include "nbl_core.h"
+
+#define NBL_DEV_MGT_TO_COMMON(dev_mgt) ((dev_mgt)->common)
+#define NBL_DEV_MGT_TO_DEV(dev_mgt) \
+ NBL_COMMON_TO_DEV(NBL_DEV_MGT_TO_COMMON(dev_mgt))
+#define NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt) ((dev_mgt)->common_dev)
+#define NBL_DEV_MGT_TO_NET_DEV(dev_mgt) ((dev_mgt)->net_dev)
+#define NBL_DEV_COMMON_TO_MSIX_INFO(dev_common) (&(dev_common)->msix_info)
+#define NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt) ((dev_mgt)->net_dev->ops)
+
+#define NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt) ((dev_mgt)->serv_ops_tbl)
+#define NBL_DEV_MGT_TO_SERV_OPS(dev_mgt) \
+ (NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt)->ops)
+#define NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt) \
+ (NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt)->priv)
+#define NBL_DEV_MGT_TO_RES_PT_OPS(dev_mgt) \
+ (&(NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt)->pt_ops))
+#define NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt) ((dev_mgt)->chan_ops_tbl)
+#define NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt) \
+ (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->ops)
+#define NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt) \
+ (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->priv)
+struct nbl_dev_mgt {
+ struct nbl_common_info *common;
+ struct nbl_service_ops_tbl *serv_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
new file mode 100644
index 000000000000..51cb298ffb1a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <linux/etherdevice.h>
+#include "nbl_dispatch.h"
+
+static int nbl_disp_setup_disp_mgt(struct nbl_common_info *common,
+ struct nbl_dispatch_mgt **disp_mgt)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ *disp_mgt =
+ devm_kzalloc(dev, sizeof(struct nbl_dispatch_mgt), GFP_KERNEL);
+ if (!*disp_mgt)
+ return -ENOMEM;
+
+ NBL_DISP_MGT_TO_COMMON(*disp_mgt) = common;
+ return 0;
+}
+
+static void nbl_disp_remove_disp_mgt(struct nbl_common_info *common,
+ struct nbl_dispatch_mgt **disp_mgt)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ devm_kfree(dev, *disp_mgt);
+ *disp_mgt = NULL;
+}
+
+static void nbl_disp_remove_ops(struct device *dev,
+ struct nbl_dispatch_ops_tbl **disp_ops_tbl)
+{
+ devm_kfree(dev, NBL_DISP_OPS_TBL_TO_OPS(*disp_ops_tbl));
+ devm_kfree(dev, *disp_ops_tbl);
+ *disp_ops_tbl = NULL;
+}
+
+static int nbl_disp_setup_ops(struct device *dev,
+ struct nbl_dispatch_ops_tbl **disp_ops_tbl,
+ struct nbl_dispatch_mgt *disp_mgt)
+{
+ struct nbl_dispatch_ops *disp_ops;
+
+ *disp_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_dispatch_ops_tbl),
+ GFP_KERNEL);
+ if (!*disp_ops_tbl)
+ return -ENOMEM;
+
+ disp_ops =
+ devm_kzalloc(dev, sizeof(struct nbl_dispatch_ops), GFP_KERNEL);
+ if (!disp_ops)
+ return -ENOMEM;
+
+ NBL_DISP_OPS_TBL_TO_OPS(*disp_ops_tbl) = disp_ops;
+ NBL_DISP_OPS_TBL_TO_PRIV(*disp_ops_tbl) = disp_mgt;
+
+ return 0;
+}
+
+int nbl_disp_init(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dispatch_mgt **disp_mgt =
+ (struct nbl_dispatch_mgt **)&NBL_ADAP_TO_DISP_MGT(adapter);
+ struct nbl_dispatch_ops_tbl **disp_ops_tbl =
+ &NBL_ADAP_TO_DISP_OPS_TBL(adapter);
+ struct nbl_resource_ops_tbl *res_ops_tbl =
+ NBL_ADAP_TO_RES_OPS_TBL(adapter);
+ struct nbl_channel_ops_tbl *chan_ops_tbl =
+ NBL_ADAP_TO_CHAN_OPS_TBL(adapter);
+ struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter);
+ struct device *dev = NBL_ADAP_TO_DEV(adapter);
+ int ret;
+
+ ret = nbl_disp_setup_disp_mgt(common, disp_mgt);
+ if (ret)
+ goto setup_mgt_fail;
+
+ ret = nbl_disp_setup_ops(dev, disp_ops_tbl, *disp_mgt);
+ if (ret)
+ goto setup_ops_fail;
+
+ NBL_DISP_MGT_TO_RES_OPS_TBL(*disp_mgt) = res_ops_tbl;
+ NBL_DISP_MGT_TO_CHAN_OPS_TBL(*disp_mgt) = chan_ops_tbl;
+ NBL_DISP_MGT_TO_DISP_OPS_TBL(*disp_mgt) = *disp_ops_tbl;
+
+ return 0;
+setup_ops_fail:
+ nbl_disp_remove_disp_mgt(common, disp_mgt);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_disp_remove(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dispatch_ops_tbl **disp_ops_tbl;
+ struct nbl_dispatch_mgt **disp_mgt;
+ struct nbl_common_info *common;
+ struct device *dev;
+
+ dev = NBL_ADAP_TO_DEV(adapter);
+ common = NBL_ADAP_TO_COMMON(adapter);
+ disp_mgt = (struct nbl_dispatch_mgt **)&NBL_ADAP_TO_DISP_MGT(adapter);
+ disp_ops_tbl = &NBL_ADAP_TO_DISP_OPS_TBL(adapter);
+
+ nbl_disp_remove_ops(dev, disp_ops_tbl);
+
+ nbl_disp_remove_disp_mgt(common, disp_mgt);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
new file mode 100644
index 000000000000..21cfb5a1000e
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DISPATCH_H_
+#define _NBL_DISPATCH_H_
+
+#include "nbl_core.h"
+
+#define NBL_DISP_MGT_TO_COMMON(disp_mgt) ((disp_mgt)->common)
+#define NBL_DISP_MGT_TO_DEV(disp_mgt) \
+ NBL_COMMON_TO_DEV(NBL_DISP_MGT_TO_COMMON(disp_mgt))
+
+#define NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt) ((disp_mgt)->res_ops_tbl)
+#define NBL_DISP_MGT_TO_RES_OPS(disp_mgt) \
+ (NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt)->ops)
+#define NBL_DISP_MGT_TO_RES_PRIV(disp_mgt) \
+ (NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt)->priv)
+#define NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt) ((disp_mgt)->chan_ops_tbl)
+#define NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt) \
+ (NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt)->ops)
+#define NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt) \
+ (NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt)->priv)
+#define NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt) ((disp_mgt)->disp_ops_tbl)
+#define NBL_DISP_MGT_TO_DISP_OPS(disp_mgt) \
+ (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->ops)
+#define NBL_DISP_MGT_TO_DISP_PRIV(disp_mgt) \
+ (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->priv)
+
+struct nbl_dispatch_mgt {
+ struct nbl_common_info *common;
+ struct nbl_resource_ops_tbl *res_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ DECLARE_BITMAP(ctrl_lvl, NBL_DISP_CTRL_LVL_MAX);
+ /* used when the caller is not in interrupt context */
+ struct mutex ops_mutex_lock;
+ bool ops_lock_required;
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
new file mode 100644
index 000000000000..a0adfb95382c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <crypto/hash.h>
+#include <linux/etherdevice.h>
+#include <linux/ip.h>
+#include <net/ipv6.h>
+#include <linux/sctp.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_vlan.h>
+
+#include "nbl_service.h"
+static struct nbl_service_ops serv_ops = {
+};
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_serv_setup_serv_mgt(struct nbl_common_info *common,
+ struct nbl_service_mgt **serv_mgt)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ *serv_mgt =
+ devm_kzalloc(dev, sizeof(struct nbl_service_mgt), GFP_KERNEL);
+ if (!*serv_mgt)
+ return -ENOMEM;
+
+ NBL_SERV_MGT_TO_COMMON(*serv_mgt) = common;
+
+ return 0;
+}
+
+static void nbl_serv_remove_serv_mgt(struct nbl_common_info *common,
+ struct nbl_service_mgt **serv_mgt)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+
+ devm_kfree(dev, *serv_mgt);
+ *serv_mgt = NULL;
+}
+
+static void nbl_serv_remove_ops(struct device *dev,
+ struct nbl_service_ops_tbl **serv_ops_tbl)
+{
+ devm_kfree(dev, *serv_ops_tbl);
+ *serv_ops_tbl = NULL;
+}
+
+static int nbl_serv_setup_ops(struct device *dev,
+ struct nbl_service_ops_tbl **serv_ops_tbl,
+ struct nbl_service_mgt *serv_mgt)
+{
+ *serv_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_service_ops_tbl),
+ GFP_KERNEL);
+ if (!*serv_ops_tbl)
+ return -ENOMEM;
+
+ (*serv_ops_tbl)->ops = &serv_ops;
+ (*serv_ops_tbl)->priv = serv_mgt;
+
+ return 0;
+}
+
+int nbl_serv_init(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct device *dev;
+ struct nbl_common_info *common;
+ struct nbl_service_mgt **serv_mgt;
+ struct nbl_service_ops_tbl **serv_ops_tbl;
+ struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ int ret = 0;
+
+ dev = NBL_ADAP_TO_DEV(adapter);
+ common = NBL_ADAP_TO_COMMON(adapter);
+ serv_mgt = (struct nbl_service_mgt **)&NBL_ADAP_TO_SERV_MGT(adapter);
+ serv_ops_tbl = &NBL_ADAP_TO_SERV_OPS_TBL(adapter);
+ disp_ops_tbl = NBL_ADAP_TO_DISP_OPS_TBL(adapter);
+ chan_ops_tbl = NBL_ADAP_TO_CHAN_OPS_TBL(adapter);
+
+ ret = nbl_serv_setup_serv_mgt(common, serv_mgt);
+ if (ret)
+ goto setup_mgt_fail;
+
+ ret = nbl_serv_setup_ops(dev, serv_ops_tbl, *serv_mgt);
+ if (ret)
+ goto setup_ops_fail;
+
+ (*serv_mgt)->disp_ops_tbl = disp_ops_tbl;
+ (*serv_mgt)->chan_ops_tbl = chan_ops_tbl;
+
+ return 0;
+
+setup_ops_fail:
+ nbl_serv_remove_serv_mgt(common, serv_mgt);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_serv_remove(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct device *dev;
+ struct nbl_common_info *common;
+ struct nbl_service_mgt **serv_mgt;
+ struct nbl_service_ops_tbl **serv_ops_tbl;
+
+ dev = NBL_ADAP_TO_DEV(adapter);
+ common = NBL_ADAP_TO_COMMON(adapter);
+ serv_mgt = (struct nbl_service_mgt **)&NBL_ADAP_TO_SERV_MGT(adapter);
+ serv_ops_tbl = &NBL_ADAP_TO_SERV_OPS_TBL(adapter);
+
+ nbl_serv_remove_ops(dev, serv_ops_tbl);
+ nbl_serv_remove_serv_mgt(common, serv_mgt);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
new file mode 100644
index 000000000000..083350bd50ba
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_SERVICE_H_
+#define _NBL_SERVICE_H_
+
+#include <linux/mm.h>
+#include <linux/ptr_ring.h>
+#include "nbl_core.h"
+
+#define NBL_SERV_MGT_TO_COMMON(serv_mgt) ((serv_mgt)->common)
+#define NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt) ((serv_mgt)->chan_ops_tbl)
+#define NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt) \
+ (NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt)->ops)
+#define NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt) \
+ (NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt)->priv)
+
+enum {
+ NBL_PROMISC = 0,
+ NBL_ALLMULTI = 1,
+};
+
+struct nbl_serv_netdev_ops {
+ void *pf_netdev_ops;
+};
+
+struct nbl_service_mgt {
+ struct nbl_common_info *common;
+ struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
new file mode 100644
index 000000000000..9561860c161c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include "nbl_hw_leonis.h"
+
+static struct nbl_hw_ops hw_ops = {
+};
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_hw_setup_hw_mgt(struct nbl_common_info *common,
+ struct nbl_hw_mgt_leonis **hw_mgt_leonis)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ *hw_mgt_leonis =
+ devm_kzalloc(dev, sizeof(struct nbl_hw_mgt_leonis), GFP_KERNEL);
+ if (!*hw_mgt_leonis)
+ return -ENOMEM;
+
+ (&(*hw_mgt_leonis)->hw_mgt)->common = common;
+
+ return 0;
+}
+
+static void nbl_hw_remove_hw_mgt(struct nbl_common_info *common,
+ struct nbl_hw_mgt_leonis **hw_mgt_leonis)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ devm_kfree(dev, *hw_mgt_leonis);
+ *hw_mgt_leonis = NULL;
+}
+
+static int nbl_hw_setup_ops(struct nbl_common_info *common,
+ struct nbl_hw_ops_tbl **hw_ops_tbl,
+ struct nbl_hw_mgt_leonis *hw_mgt_leonis)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ *hw_ops_tbl =
+ devm_kzalloc(dev, sizeof(struct nbl_hw_ops_tbl), GFP_KERNEL);
+ if (!*hw_ops_tbl)
+ return -ENOMEM;
+
+ (*hw_ops_tbl)->ops = &hw_ops;
+ (*hw_ops_tbl)->priv = hw_mgt_leonis;
+
+ return 0;
+}
+
+static void nbl_hw_remove_ops(struct nbl_common_info *common,
+ struct nbl_hw_ops_tbl **hw_ops_tbl)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ devm_kfree(dev, *hw_ops_tbl);
+ *hw_ops_tbl = NULL;
+}
+
+int nbl_hw_init_leonis(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_common_info *common;
+ struct pci_dev *pdev;
+ struct nbl_hw_mgt_leonis **hw_mgt_leonis;
+ struct nbl_hw_mgt *hw_mgt;
+ struct nbl_hw_ops_tbl **hw_ops_tbl;
+ int bar_mask;
+ int ret = 0;
+
+ common = NBL_ADAP_TO_COMMON(adapter);
+ hw_mgt_leonis =
+ (struct nbl_hw_mgt_leonis **)&NBL_ADAP_TO_HW_MGT(adapter);
+ hw_ops_tbl = &NBL_ADAP_TO_HW_OPS_TBL(adapter);
+ pdev = NBL_COMMON_TO_PDEV(common);
+
+ ret = nbl_hw_setup_hw_mgt(common, hw_mgt_leonis);
+ if (ret)
+ goto setup_mgt_fail;
+
+ hw_mgt = &(*hw_mgt_leonis)->hw_mgt;
+ bar_mask = BIT(NBL_MEMORY_BAR) | BIT(NBL_MAILBOX_BAR);
+ ret = pci_request_selected_regions(pdev, bar_mask, NBL_DRIVER_NAME);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "Request memory bar and mailbox bar failed, err = %d\n",
+ ret);
+ goto request_bar_region_fail;
+ }
+
+ if (param->caps.has_ctrl) {
+ hw_mgt->hw_addr =
+ ioremap(pci_resource_start(pdev, NBL_MEMORY_BAR),
+ pci_resource_len(pdev, NBL_MEMORY_BAR) -
+ NBL_RDMA_NOTIFY_OFF);
+ if (!hw_mgt->hw_addr) {
+ dev_err(&pdev->dev, "Memory bar ioremap failed\n");
+ ret = -EIO;
+ goto ioremap_err;
+ }
+ hw_mgt->hw_size = pci_resource_len(pdev, NBL_MEMORY_BAR) -
+ NBL_RDMA_NOTIFY_OFF;
+ } else {
+ hw_mgt->hw_addr =
+ ioremap(pci_resource_start(pdev, NBL_MEMORY_BAR),
+ NBL_RDMA_NOTIFY_OFF);
+ if (!hw_mgt->hw_addr) {
+ dev_err(&pdev->dev, "Memory bar ioremap failed\n");
+ ret = -EIO;
+ goto ioremap_err;
+ }
+ hw_mgt->hw_size = NBL_RDMA_NOTIFY_OFF;
+ }
+
+ hw_mgt->notify_offset = 0;
+ hw_mgt->mailbox_bar_hw_addr = pci_ioremap_bar(pdev, NBL_MAILBOX_BAR);
+ if (!hw_mgt->mailbox_bar_hw_addr) {
+ dev_err(&pdev->dev, "Mailbox bar ioremap failed\n");
+ ret = -EIO;
+ goto mailbox_ioremap_err;
+ }
+
+ spin_lock_init(&hw_mgt->reg_lock);
+ hw_mgt->should_lock = true;
+
+ ret = nbl_hw_setup_ops(common, hw_ops_tbl, *hw_mgt_leonis);
+ if (ret)
+ goto setup_ops_fail;
+
+ (*hw_mgt_leonis)->ro_enable = pcie_relaxed_ordering_enabled(pdev);
+ return 0;
+
+setup_ops_fail:
+ iounmap(hw_mgt->mailbox_bar_hw_addr);
+mailbox_ioremap_err:
+ iounmap(hw_mgt->hw_addr);
+ioremap_err:
+ pci_release_selected_regions(pdev, bar_mask);
+request_bar_region_fail:
+ nbl_hw_remove_hw_mgt(common, hw_mgt_leonis);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_hw_remove_leonis(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_common_info *common;
+ struct nbl_hw_mgt_leonis **hw_mgt_leonis;
+ struct nbl_hw_ops_tbl **hw_ops_tbl;
+ struct pci_dev *pdev;
+ u8 __iomem *hw_addr;
+ u8 __iomem *mailbox_bar_hw_addr;
+ int bar_mask = BIT(NBL_MEMORY_BAR) | BIT(NBL_MAILBOX_BAR);
+
+ common = NBL_ADAP_TO_COMMON(adapter);
+ hw_mgt_leonis =
+ (struct nbl_hw_mgt_leonis **)&NBL_ADAP_TO_HW_MGT(adapter);
+ hw_ops_tbl = &NBL_ADAP_TO_HW_OPS_TBL(adapter);
+ pdev = NBL_COMMON_TO_PDEV(common);
+
+ hw_addr = (*hw_mgt_leonis)->hw_mgt.hw_addr;
+ mailbox_bar_hw_addr = (*hw_mgt_leonis)->hw_mgt.mailbox_bar_hw_addr;
+
+ iounmap(mailbox_bar_hw_addr);
+ iounmap(hw_addr);
+ pci_release_selected_regions(pdev, bar_mask);
+ nbl_hw_remove_ops(common, hw_ops_tbl);
+ nbl_hw_remove_hw_mgt(common, hw_mgt_leonis);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
new file mode 100644
index 000000000000..b078b765f772
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_HW_LEONIS_H_
+#define _NBL_HW_LEONIS_H_
+
+#include "nbl_core.h"
+#include "nbl_hw_reg.h"
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
new file mode 100644
index 000000000000..d08ab16ed493
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+
+#include "nbl_resource_leonis.h"
+
+static void nbl_res_setup_common_ops(struct nbl_resource_mgt *res_mgt)
+{
+}
+
+static struct nbl_resource_ops res_ops = {
+};
+
+static struct nbl_res_product_ops product_ops = {
+};
+
+static int
+nbl_res_setup_res_mgt(struct nbl_common_info *common,
+ struct nbl_resource_mgt_leonis **res_mgt_leonis)
+{
+ struct device *dev;
+ struct nbl_resource_info *resource_info;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ *res_mgt_leonis = devm_kzalloc(dev,
+ sizeof(struct nbl_resource_mgt_leonis),
+ GFP_KERNEL);
+ if (!*res_mgt_leonis)
+ return -ENOMEM;
+ NBL_RES_MGT_TO_COMMON(&(*res_mgt_leonis)->res_mgt) = common;
+
+ resource_info =
+ devm_kzalloc(dev, sizeof(struct nbl_resource_info), GFP_KERNEL);
+ if (!resource_info)
+ return -ENOMEM;
+ NBL_RES_MGT_TO_RES_INFO(&(*res_mgt_leonis)->res_mgt) = resource_info;
+
+ return 0;
+}
+
+static void
+nbl_res_remove_res_mgt(struct nbl_common_info *common,
+ struct nbl_resource_mgt_leonis **res_mgt_leonis)
+{
+ struct device *dev;
+
+ dev = NBL_COMMON_TO_DEV(common);
+ devm_kfree(dev, NBL_RES_MGT_TO_RES_INFO(&(*res_mgt_leonis)->res_mgt));
+ devm_kfree(dev, *res_mgt_leonis);
+ *res_mgt_leonis = NULL;
+}
+
+static void nbl_res_remove_ops(struct device *dev,
+ struct nbl_resource_ops_tbl **res_ops_tbl)
+{
+ devm_kfree(dev, *res_ops_tbl);
+ *res_ops_tbl = NULL;
+}
+
+static int nbl_res_setup_ops(struct device *dev,
+ struct nbl_resource_ops_tbl **res_ops_tbl,
+ struct nbl_resource_mgt_leonis *res_mgt_leonis)
+{
+ *res_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_resource_ops_tbl),
+ GFP_KERNEL);
+ if (!*res_ops_tbl)
+ return -ENOMEM;
+
+ (*res_ops_tbl)->ops = &res_ops;
+ (*res_ops_tbl)->priv = res_mgt_leonis;
+
+ return 0;
+}
+
+static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
+{
+}
+
+static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis,
+ struct nbl_func_caps caps)
+{
+ return 0;
+}
+
+int nbl_res_init_leonis(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adap = (struct nbl_adapter *)p;
+ struct device *dev;
+ struct nbl_common_info *common;
+ struct nbl_resource_mgt_leonis **mgt;
+ struct nbl_resource_ops_tbl **res_ops_tbl;
+ struct nbl_hw_ops_tbl *hw_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ int ret = 0;
+
+ dev = NBL_ADAP_TO_DEV(adap);
+ common = NBL_ADAP_TO_COMMON(adap);
+ mgt =
+ (struct nbl_resource_mgt_leonis **)&NBL_ADAP_TO_RES_MGT(adap);
+ res_ops_tbl = &NBL_ADAP_TO_RES_OPS_TBL(adap);
+ hw_ops_tbl = NBL_ADAP_TO_HW_OPS_TBL(adap);
+ chan_ops_tbl = NBL_ADAP_TO_CHAN_OPS_TBL(adap);
+
+ ret = nbl_res_setup_res_mgt(common, mgt);
+ if (ret)
+ goto setup_mgt_fail;
+
+ nbl_res_setup_common_ops(&(*mgt)->res_mgt);
+ (&(*mgt)->res_mgt)->chan_ops_tbl = chan_ops_tbl;
+ (&(*mgt)->res_mgt)->hw_ops_tbl = hw_ops_tbl;
+
+ (&(*mgt)->res_mgt)->product_ops = &product_ops;
+
+ ret = nbl_res_start(*mgt, param->caps);
+ if (ret)
+ goto start_fail;
+
+ ret = nbl_res_setup_ops(dev, res_ops_tbl, *mgt);
+ if (ret)
+ goto setup_ops_fail;
+
+ return 0;
+
+setup_ops_fail:
+ nbl_res_stop(*mgt);
+start_fail:
+ nbl_res_remove_res_mgt(common, mgt);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_res_remove_leonis(void *p)
+{
+ struct nbl_adapter *adap = (struct nbl_adapter *)p;
+ struct device *dev;
+ struct nbl_common_info *common;
+ struct nbl_resource_mgt_leonis **mgt;
+ struct nbl_resource_ops_tbl **res_ops_tbl;
+
+ dev = NBL_ADAP_TO_DEV(adap);
+ common = NBL_ADAP_TO_COMMON(adap);
+ mgt = (struct nbl_resource_mgt_leonis **)&NBL_ADAP_TO_RES_MGT(adap);
+ res_ops_tbl = &NBL_ADAP_TO_RES_OPS_TBL(adap);
+
+ nbl_res_remove_ops(dev, res_ops_tbl);
+ nbl_res_stop(*mgt);
+ nbl_res_remove_res_mgt(common, mgt);
+}
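The init/teardown ladder above follows the usual goto-unwind idiom: each setup step that fails jumps to a label that undoes only the steps completed so far, in reverse order. A minimal userspace sketch of the same control flow (the step and label names mirror nbl_res_init_leonis(), but the bodies here are stand-ins, not driver code):

```c
#include <assert.h>

/* Records which teardown steps ran. */
static const char *undone[4];
static int n_undone;

static void undo(const char *name) { undone[n_undone++] = name; }

/* fail_at selects which step fails (0 = none), mimicking the
 * setup_mgt -> start -> setup_ops ladder. */
static int init_all(int fail_at)
{
	int ret;

	n_undone = 0;
	ret = (fail_at == 1) ? -1 : 0;	/* nbl_res_setup_res_mgt() */
	if (ret)
		goto setup_mgt_fail;
	ret = (fail_at == 2) ? -1 : 0;	/* nbl_res_start() */
	if (ret)
		goto start_fail;
	ret = (fail_at == 3) ? -1 : 0;	/* nbl_res_setup_ops() */
	if (ret)
		goto setup_ops_fail;
	return 0;

setup_ops_fail:
	undo("stop");			/* nbl_res_stop() */
start_fail:
	undo("remove_mgt");		/* nbl_res_remove_res_mgt() */
setup_mgt_fail:
	return ret;
}
```

Note that a failure in the last step unwinds two stages while a failure in the second unwinds only one, which is exactly what the label fall-through buys.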
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
new file mode 100644
index 000000000000..80925ff9e680
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_RESOURCE_LEONIS_H_
+#define _NBL_RESOURCE_LEONIS_H_
+
+#include "nbl_resource.h"
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h
new file mode 100644
index 000000000000..b0948a9a0042
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_HW_REG_H_
+#define _NBL_HW_REG_H_
+
+#include "nbl_core.h"
+
+#define NBL_HW_MGT_TO_COMMON(hw_mgt) ((hw_mgt)->common)
+#define NBL_HW_MGT_TO_DEV(hw_mgt) \
+ NBL_COMMON_TO_DEV(NBL_HW_MGT_TO_COMMON(hw_mgt))
+#define NBL_MEMORY_BAR (0)
+#define NBL_MAILBOX_BAR (2)
+#define NBL_RDMA_NOTIFY_OFF (8192)
+#define NBL_HW_DUMMY_REG (0x1300904)
+
+struct nbl_hw_mgt {
+ struct nbl_common_info *common;
+ u8 __iomem *hw_addr;
+ u8 __iomem *mailbox_bar_hw_addr;
+ u64 notify_offset;
+ u32 version;
+ u32 hw_size;
+ spinlock_t reg_lock; /* Protect reg access */
+ bool should_lock;
+};
+
+static inline u32 rd32(u8 __iomem *addr, u64 reg)
+{
+ return readl(addr + (reg));
+}
+
+static inline void wr32_barrier(u8 __iomem *addr, u64 reg, u32 value)
+{
+ writel(value, addr + reg);
+}
+
+static inline void nbl_hw_rd_regs(struct nbl_hw_mgt *hw_mgt, u64 reg,
+ u8 *data, u32 len)
+{
+ u32 size = len / 4;
+ u32 i = 0;
+
+ if (len % 4)
+ return;
+
+ spin_lock(&hw_mgt->reg_lock);
+
+ for (i = 0; i < size; i++)
+ *(u32 *)(data + i * sizeof(u32)) =
+ rd32(hw_mgt->hw_addr, reg + i * sizeof(u32));
+ spin_unlock(&hw_mgt->reg_lock);
+}
+
+static inline void nbl_hw_wr_regs(struct nbl_hw_mgt *hw_mgt,
+ u64 reg, const u8 *data, u32 len)
+{
+ u32 size = len / 4;
+ u32 i = 0;
+
+ if (len % 4)
+ return;
+
+ spin_lock(&hw_mgt->reg_lock);
+
+ for (i = 0; i < size; i++)
+ /* Used for emu, make sure that we won't write too frequently */
+ wr32_barrier(hw_mgt->hw_addr, reg + i * sizeof(u32),
+ *(u32 *)(data + i * sizeof(u32)));
+ spin_unlock(&hw_mgt->reg_lock);
+}
+
+static inline void nbl_hw_wr32(struct nbl_hw_mgt *hw_mgt, u64 reg, u32 value)
+{
+ /* Used for emu, make sure that we won't write too frequently */
+ wr32_barrier(hw_mgt->hw_addr, reg, value);
+}
+
+static inline u32 nbl_hw_rd32(struct nbl_hw_mgt *hw_mgt, u64 reg)
+{
+ return rd32(hw_mgt->hw_addr, reg);
+}
+
+static inline void nbl_mbx_wr32(void *priv, u64 reg, u32 value)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+
+ writel(value, hw_mgt->mailbox_bar_hw_addr + reg);
+}
+
+static inline void nbl_flush_writes(struct nbl_hw_mgt *hw_mgt)
+{
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+}
+
+static inline u32 nbl_mbx_rd32(void *priv, u64 reg)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+
+ return readl(hw_mgt->mailbox_bar_hw_addr + reg);
+}
+
+static inline void nbl_hw_read_mbx_regs(struct nbl_hw_mgt *hw_mgt,
+ u64 reg, u8 *data, u32 len)
+{
+ u32 i = 0;
+
+ if (len % 4)
+ return;
+
+ for (i = 0; i < len / 4; i++)
+ *(u32 *)(data + i * sizeof(u32)) =
+ nbl_mbx_rd32(hw_mgt, reg + i * sizeof(u32));
+}
+
+static inline void nbl_hw_write_mbx_regs(struct nbl_hw_mgt *hw_mgt,
+ u64 reg, const u8 *data, u32 len)
+{
+ u32 i = 0;
+
+ if (len % 4)
+ return;
+
+ for (i = 0; i < len / 4; i++)
+ /* Used for emu, make sure that we won't write too frequently */
+ nbl_mbx_wr32(hw_mgt, reg + i * sizeof(u32),
+ *(u32 *)(data + i * sizeof(u32)));
+}
+
+/* Mgt structure for each product.
+ * Every individual mgt must have the common mgt as its first member,
+ * and carry its product-specific data in the rest of the structure.
+ */
+struct nbl_hw_mgt_leonis {
+ struct nbl_hw_mgt hw_mgt;
+ bool ro_enable;
+};
+#endif
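nbl_hw_rd_regs()/nbl_hw_wr_regs() silently reject any transfer whose length is not a multiple of 4 and move the rest one 32-bit word at a time. The same guard-and-copy logic can be sketched against plain memory standing in for the MMIO window (fake_regs, wr32_sim and hw_wr_regs_sim are illustrative names, not driver API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Plain memory standing in for the device's MMIO window. */
static uint32_t fake_regs[16];

static void wr32_sim(uint64_t reg, uint32_t v)
{
	fake_regs[reg / 4] = v;
}

/* Mirrors nbl_hw_wr_regs(): reject lengths that are not a multiple
 * of 4, otherwise copy one 32-bit word at a time. */
static int hw_wr_regs_sim(uint64_t reg, const uint8_t *data, uint32_t len)
{
	uint32_t i, word;

	if (len % 4)
		return -1;
	for (i = 0; i < len / 4; i++) {
		memcpy(&word, data + i * sizeof(uint32_t), sizeof(word));
		wr32_sim(reg + i * sizeof(uint32_t), word);
	}
	return 0;
}
```

One design note: the real helpers return void, so an unaligned length is dropped without any error indication; the sketch returns -1 there only to make the guard observable.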
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
new file mode 100644
index 000000000000..a43f4ca79b2b
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_RESOURCE_H_
+#define _NBL_RESOURCE_H_
+
+#include "nbl_core.h"
+
+#define NBL_RES_MGT_TO_COMMON(res_mgt) ((res_mgt)->common)
+#define NBL_RES_MGT_TO_COMMON_OPS(res_mgt) (&((res_mgt)->common_ops))
+#define NBL_RES_MGT_TO_RES_INFO(res_mgt) ((res_mgt)->resource_info)
+
+#define NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt) ((res_mgt)->hw_ops_tbl)
+#define NBL_RES_MGT_TO_HW_OPS(res_mgt) (NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt)->ops)
+#define NBL_RES_MGT_TO_HW_PRIV(res_mgt) \
+ (NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt)->priv)
+#define NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt) ((res_mgt)->chan_ops_tbl)
+#define NBL_RES_MGT_TO_CHAN_OPS(res_mgt) \
+ (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->ops)
+#define NBL_RES_MGT_TO_CHAN_PRIV(res_mgt) \
+ (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->priv)
+
+struct nbl_resource_info {
+};
+
+struct nbl_resource_common_ops {
+};
+
+struct nbl_res_product_ops {
+};
+
+struct nbl_resource_mgt {
+ struct nbl_resource_common_ops common_ops;
+ struct nbl_common_info *common;
+ struct nbl_resource_info *resource_info;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_hw_ops_tbl *hw_ops_tbl;
+ struct nbl_queue_mgt *queue_mgt;
+ struct nbl_interrupt_mgt *intr_mgt;
+ struct nbl_txrx_mgt *txrx_mgt;
+ struct nbl_vsi_mgt *vsi_mgt;
+ struct nbl_res_product_ops *product_ops;
+};
+
+/* Mgt structure for each product.
+ * Every individual mgt must have the common mgt as its first member, and
+ * carry its product-specific data in the rest of the structure.
+ */
+struct nbl_resource_mgt_leonis {
+ struct nbl_resource_mgt res_mgt;
+};
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
new file mode 100644
index 000000000000..f42be0e4e701
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_CHANNEL_H_
+#define _NBL_DEF_CHANNEL_H_
+
+#include <linux/if_ether.h>
+#include "nbl_include.h"
+
+#define NBL_CHAN_OPS_TBL_TO_OPS(chan_ops_tbl) ((chan_ops_tbl)->ops)
+#define NBL_CHAN_OPS_TBL_TO_PRIV(chan_ops_tbl) ((chan_ops_tbl)->priv)
+
+enum nbl_channel_type {
+ NBL_CHAN_TYPE_MAILBOX,
+ NBL_CHAN_TYPE_MAX
+};
+
+struct nbl_channel_ops {
+};
+
+struct nbl_channel_ops_tbl {
+ struct nbl_channel_ops *ops;
+ void *priv;
+};
+
+int nbl_chan_init_common(void *p, struct nbl_init_param *param);
+void nbl_chan_remove_common(void *p);
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
new file mode 100644
index 000000000000..7b71f6075ccf
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_COMMON_H_
+#define _NBL_DEF_COMMON_H_
+
+#include <linux/netdev_features.h>
+#include "nbl_include.h"
+
+#define nbl_err(common, fmt, ...) \
+do { \
+ typeof(common) _common = (common); \
+ dev_err(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define nbl_warn(common, fmt, ...) \
+do { \
+ typeof(common) _common = (common); \
+ dev_warn(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define nbl_info(common, fmt, ...) \
+do { \
+ typeof(common) _common = (common); \
+ dev_info(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define nbl_debug(common, fmt, ...) \
+do { \
+ typeof(common) _common = (common); \
+ dev_dbg(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define NBL_COMMON_TO_PDEV(common) ((common)->pdev)
+#define NBL_COMMON_TO_DEV(common) ((common)->dev)
+#define NBL_COMMON_TO_DMA_DEV(common) ((common)->dma_dev)
+#define NBL_COMMON_TO_VSI_ID(common) ((common)->vsi_id)
+#define NBL_COMMON_TO_ETH_ID(common) ((common)->eth_id)
+#define NBL_COMMON_TO_ETH_MODE(common) ((common)->eth_mode)
+#define NBL_COMMON_TO_DEBUG_LVL(common) ((common)->debug_lvl)
+
+#define NBL_COMMON_TO_OCP_CAP(common) ((common)->is_ocp)
+#define NBL_COMMON_TO_PCI_USING_DAC(common) ((common)->pci_using_dac)
+#define NBL_COMMON_TO_MGT_PF(common) ((common)->mgt_pf)
+#define NBL_COMMON_TO_PCI_FUNC_ID(common) ((common)->function)
+#define NBL_COMMON_TO_LOGIC_ETH_ID(common) ((common)->logic_eth_id)
+
+struct nbl_common_info {
+ struct pci_dev *pdev;
+ struct device *dev;
+ struct device *dma_dev;
+ u32 msg_enable;
+ u16 vsi_id;
+ u8 eth_id;
+ u8 logic_eth_id;
+ u8 eth_mode;
+
+ u8 function;
+ u8 devid;
+ u8 bus;
+ u8 hw_bus;
+ u16 mgt_pf;
+
+ bool pci_using_dac;
+ u8 is_ocp;
+ enum nbl_product_type product_type;
+ u8 is_ctrl;
+};
+
+#endif
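The nbl_err()/nbl_warn()/nbl_info()/nbl_debug() wrappers capture `common` into a typeof() local before use, so an argument with side effects is evaluated exactly once even though the macro body names it more than once. A small sketch of that single-evaluation pattern (get_common and use_common_twice are illustrative, not driver API):

```c
#include <assert.h>

static int lookups;

/* Stand-in for an argument expression with a side effect. */
static int *get_common(void)
{
	static int common = 42;

	lookups++;
	return &common;
}

/* Same shape as the driver's log macros: the argument is captured
 * once into a local, and every later "use" reads the local instead
 * of re-evaluating the expression. */
#define use_common_twice(common)			\
do {							\
	__typeof__(common) _common = (common);		\
	(void)*_common;					\
	(void)*_common;					\
} while (0)
```

Without the local, passing something like `get_common()` to a macro that expands its argument twice would run the side effect twice.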
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
new file mode 100644
index 000000000000..e256eb447a83
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_DEV_H_
+#define _NBL_DEF_DEV_H_
+
+#include "nbl_include.h"
+
+#define NBL_DEV_OPS_TBL_TO_OPS(dev_ops_tbl) ((dev_ops_tbl)->ops)
+#define NBL_DEV_OPS_TBL_TO_PRIV(dev_ops_tbl) ((dev_ops_tbl)->priv)
+
+struct nbl_dev_ops {
+};
+
+struct nbl_dev_ops_tbl {
+ struct nbl_dev_ops *ops;
+ void *priv;
+};
+
+int nbl_dev_init(void *p, struct nbl_init_param *param);
+void nbl_dev_remove(void *p);
+int nbl_dev_start(void *p, struct nbl_init_param *param);
+void nbl_dev_stop(void *p);
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
new file mode 100644
index 000000000000..2f6f1ffedafc
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_DISPATCH_H_
+#define _NBL_DEF_DISPATCH_H_
+
+#include "nbl_include.h"
+
+#define NBL_DISP_OPS_TBL_TO_OPS(disp_ops_tbl) ((disp_ops_tbl)->ops)
+#define NBL_DISP_OPS_TBL_TO_PRIV(disp_ops_tbl) ((disp_ops_tbl)->priv)
+
+enum {
+ NBL_DISP_CTRL_LVL_NEVER = 0,
+ NBL_DISP_CTRL_LVL_MGT,
+ NBL_DISP_CTRL_LVL_NET,
+ NBL_DISP_CTRL_LVL_ALWAYS,
+ NBL_DISP_CTRL_LVL_MAX,
+};
+
+struct nbl_dispatch_ops {
+};
+
+struct nbl_dispatch_ops_tbl {
+ struct nbl_dispatch_ops *ops;
+ void *priv;
+};
+
+int nbl_disp_init(void *p, struct nbl_init_param *param);
+void nbl_disp_remove(void *p);
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
new file mode 100644
index 000000000000..6ac72e26ccd6
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_HW_H_
+#define _NBL_DEF_HW_H_
+
+#include "nbl_include.h"
+
+struct nbl_hw_ops {
+};
+
+struct nbl_hw_ops_tbl {
+ struct nbl_hw_ops *ops;
+ void *priv;
+};
+
+int nbl_hw_init_leonis(void *p, struct nbl_init_param *param);
+void nbl_hw_remove_leonis(void *p);
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
new file mode 100644
index 000000000000..55cacc94888c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_RESOURCE_H_
+#define _NBL_DEF_RESOURCE_H_
+
+#include "nbl_include.h"
+
+struct nbl_resource_pt_ops {
+ netdev_tx_t (*start_xmit)(struct sk_buff *skb,
+ struct net_device *netdev);
+ int (*napi_poll)(struct napi_struct *napi, int budget);
+};
+
+struct nbl_resource_ops {
+};
+
+struct nbl_resource_ops_tbl {
+ struct nbl_resource_ops *ops;
+ void *priv;
+};
+
+int nbl_res_init_leonis(void *p, struct nbl_init_param *param);
+void nbl_res_remove_leonis(void *p);
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
new file mode 100644
index 000000000000..dc261fda3aa5
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_SERVICE_H_
+#define _NBL_DEF_SERVICE_H_
+
+#include "nbl_include.h"
+
+struct nbl_service_ops {
+};
+
+struct nbl_service_ops_tbl {
+ struct nbl_resource_pt_ops pt_ops;
+ struct nbl_service_ops *ops;
+ void *priv;
+};
+
+int nbl_serv_init(void *priv, struct nbl_init_param *param);
+void nbl_serv_remove(void *priv);
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index f12bf019dfee..f58b4dbb0aab 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -8,10 +8,15 @@
#define _NBL_INCLUDE_H_
#include <linux/types.h>
-
+#include <linux/netdevice.h>
/* ------ Basic definitions ------- */
#define NBL_DRIVER_NAME "nbl_core"
+enum nbl_product_type {
+ NBL_LEONIS_TYPE,
+ NBL_PRODUCT_MAX,
+};
+
struct nbl_func_caps {
u32 has_ctrl:1;
u32 has_net:1;
@@ -20,4 +25,10 @@ struct nbl_func_caps {
u32 rsv:28;
};
+struct nbl_init_param {
+ struct nbl_func_caps caps;
+ enum nbl_product_type product_type;
+ bool pci_using_dac;
+};
+
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
new file mode 100644
index 000000000000..2f530c6b112c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_PRODUCT_BASE_H_
+#define _NBL_PRODUCT_BASE_H_
+
+#include "nbl_include.h"
+
+struct nbl_product_base_ops {
+ int (*hw_init)(void *p, struct nbl_init_param *param);
+ void (*hw_remove)(void *p);
+ int (*res_init)(void *p, struct nbl_init_param *param);
+ void (*res_remove)(void *p);
+ int (*chan_init)(void *p, struct nbl_init_param *param);
+ void (*chan_remove)(void *p);
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
index 2e3026fc3250..624e8a75a1ff 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -7,17 +7,224 @@
#include <linux/aer.h>
#include "nbl_core.h"
+static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = {
+ {
+ .hw_init = nbl_hw_init_leonis,
+ .hw_remove = nbl_hw_remove_leonis,
+ .res_init = nbl_res_init_leonis,
+ .res_remove = nbl_res_remove_leonis,
+ .chan_init = nbl_chan_init_common,
+ .chan_remove = nbl_chan_remove_common,
+ },
+};
+
+int nbl_core_start(struct nbl_adapter *adapter, struct nbl_init_param *param)
+{
+ int ret = 0;
+
+ ret = nbl_dev_start(adapter, param);
+ return ret;
+}
+
+void nbl_core_stop(struct nbl_adapter *adapter)
+{
+ nbl_dev_stop(adapter);
+}
+
+static void
+nbl_core_setup_product_ops(struct nbl_adapter *adapter,
+ struct nbl_init_param *param,
+ struct nbl_product_base_ops **product_base_ops)
+{
+ adapter->product_base_ops = &nbl_product_base_ops[param->product_type];
+ *product_base_ops = adapter->product_base_ops;
+}
+
+struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
+ struct nbl_init_param *param)
+{
+ struct nbl_adapter *adapter;
+ struct nbl_common_info *common;
+ struct nbl_product_base_ops *product_base_ops;
+ int ret = 0;
+
+ if (!pdev)
+ return NULL;
+
+ adapter = devm_kzalloc(&pdev->dev, sizeof(struct nbl_adapter),
+ GFP_KERNEL);
+ if (!adapter)
+ return NULL;
+
+ adapter->pdev = pdev;
+ common = NBL_ADAP_TO_COMMON(adapter);
+
+ common->pdev = pdev;
+ common->dev = &pdev->dev;
+ common->dma_dev = &pdev->dev;
+ common->is_ocp = param->caps.is_ocp;
+ common->is_ctrl = param->caps.has_ctrl;
+ common->pci_using_dac = param->pci_using_dac;
+ common->function = PCI_FUNC(pdev->devfn);
+ common->devid = PCI_SLOT(pdev->devfn);
+ common->bus = pdev->bus->number;
+ common->product_type = param->product_type;
+
+ memcpy(&adapter->init_param, param, sizeof(adapter->init_param));
+
+ nbl_core_setup_product_ops(adapter, param, &product_base_ops);
+
+ /*
+ * Each product's hw/chan/res layers differ significantly,
+ * so call the product-specific init ops.
+ */
+ ret = product_base_ops->hw_init(adapter, param);
+ if (ret)
+ goto hw_init_fail;
+
+ ret = product_base_ops->chan_init(adapter, param);
+ if (ret)
+ goto chan_init_fail;
+
+ ret = product_base_ops->res_init(adapter, param);
+ if (ret)
+ goto res_init_fail;
+
+ ret = nbl_disp_init(adapter, param);
+ if (ret)
+ goto disp_init_fail;
+
+ ret = nbl_serv_init(adapter, param);
+ if (ret)
+ goto serv_init_fail;
+
+ ret = nbl_dev_init(adapter, param);
+ if (ret)
+ goto dev_init_fail;
+ return adapter;
+
+dev_init_fail:
+ nbl_serv_remove(adapter);
+serv_init_fail:
+ nbl_disp_remove(adapter);
+disp_init_fail:
+ product_base_ops->res_remove(adapter);
+res_init_fail:
+ product_base_ops->chan_remove(adapter);
+chan_init_fail:
+ product_base_ops->hw_remove(adapter);
+hw_init_fail:
+ devm_kfree(&pdev->dev, adapter);
+ return NULL;
+}
+
+void nbl_core_remove(struct nbl_adapter *adapter)
+{
+ struct nbl_product_base_ops *product_base_ops;
+ struct device *dev;
+
+ dev = NBL_ADAP_TO_DEV(adapter);
+ product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter);
+ nbl_dev_remove(adapter);
+ nbl_serv_remove(adapter);
+ nbl_disp_remove(adapter);
+ product_base_ops->res_remove(adapter);
+ product_base_ops->chan_remove(adapter);
+ product_base_ops->hw_remove(adapter);
+ devm_kfree(dev, adapter);
+}
+
+static void nbl_get_func_param(struct pci_dev *pdev, kernel_ulong_t driver_data,
+ struct nbl_init_param *param)
+{
+ param->caps.has_ctrl = NBL_CAP_IS_CTRL(driver_data);
+ param->caps.has_net = NBL_CAP_IS_NET(driver_data);
+ param->caps.is_nic = NBL_CAP_IS_NIC(driver_data);
+ param->caps.is_ocp = NBL_CAP_IS_OCP(driver_data);
+
+ if (NBL_CAP_IS_LEONIS(driver_data))
+ param->product_type = NBL_LEONIS_TYPE;
+
+ /*
+ * On Leonis only PF0 has the ctrl capability, but PF0 shares its
+ * PCIe device_id with the other PFs, so handle it specially.
+ */
+ if (param->product_type == NBL_LEONIS_TYPE &&
+ (PCI_FUNC(pdev->devfn) == 0))
+ param->caps.has_ctrl = 1;
+}
+
static int nbl_probe(struct pci_dev *pdev,
const struct pci_device_id __always_unused *id)
{
struct device *dev = &pdev->dev;
+ struct nbl_adapter *adapter = NULL;
+ struct nbl_init_param param = {{0}};
+ int err;
+ err = pci_enable_device(pdev);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to enable PCI device\n");
+ return err;
+ }
+
+ param.pci_using_dac = true;
+ nbl_get_func_param(pdev, id->driver_data, &param);
+
+ err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (err) {
+ dev_err(dev, "Configure DMA 64 bit mask failed, err = %d\n",
+ err);
+ param.pci_using_dac = false;
+ err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(dev,
+ "Configure DMA 32 bit mask failed, err = %d\n",
+ err);
+ goto configure_dma_err;
+ }
+ }
+
+ pci_set_master(pdev);
+
+ pci_save_state(pdev);
+
+ adapter = nbl_core_init(pdev, &param);
+ if (!adapter) {
+ dev_err(dev, "Nbl adapter init fail\n");
+ err = -ENOMEM;
+ goto adapter_init_err;
+ }
+
+ pci_set_drvdata(pdev, adapter);
+ err = nbl_core_start(adapter, &param);
+ if (err)
+ goto core_start_err;
dev_dbg(dev, "nbl probe ok!\n");
return 0;
+core_start_err:
+ nbl_core_remove(adapter);
+adapter_init_err:
+ pci_clear_master(pdev);
+configure_dma_err:
+ pci_disable_device(pdev);
+ return err;
}
static void nbl_remove(struct pci_dev *pdev)
{
+ struct nbl_adapter *adapter = pci_get_drvdata(pdev);
+
+ dev_dbg(&pdev->dev, "nbl remove\n");
+ if (!adapter)
+ return;
+ pci_disable_sriov(pdev);
+
+ nbl_core_stop(adapter);
+ nbl_core_remove(adapter);
+
+ pci_clear_master(pdev);
+ pci_disable_device(pdev);
+
dev_dbg(&pdev->dev, "nbl remove OK!\n");
}
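The probe path above prefers a 64-bit DMA mask, records the downgrade in pci_using_dac when the platform refuses it, and fails only when the 32-bit fallback is also rejected. That decision flow, with a stand-in for dma_set_mask_and_coherent() (dma_set_mask_sim and pick_dma_mask are illustrative names):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for dma_set_mask_and_coherent(): succeed only when the
 * platform supports at least `bits` of DMA addressing. */
static int dma_set_mask_sim(int platform_bits, int bits)
{
	return platform_bits >= bits ? 0 : -5; /* -EIO */
}

/* Mirrors nbl_probe(): try 64-bit first, note the downgrade in
 * *using_dac, and fail only if 32-bit is refused too. */
static int pick_dma_mask(int platform_bits, bool *using_dac)
{
	*using_dac = true;
	if (dma_set_mask_sim(platform_bits, 64) == 0)
		return 64;
	*using_dac = false;
	if (dma_set_mask_sim(platform_bits, 32) == 0)
		return 32;
	return -1;
}
```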
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v3 net-next 03/15] net/nebula-matrix: add machine-generated headers and chip definitions
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 02/15] net/nebula-matrix: add our driver architecture illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 04/15] net/nebula-matrix: add channel layer implementation and the corresponding hw interfaces illusion.wang
` (12 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
1. nbl_hw_leonis/base/*: machine-generated headers
2. nbl_hw.h/nbl_hw_leonis.h: chip-related register definitions
3. nbl_hw_leonis_regs.c: P4 configuration invoked during chip
initialization
4. nbl_def_channel.h: for compatibility, the message ID values and
message structures are fixed.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 1 +
.../nebula-matrix/nbl/nbl_hw/nbl_hw.h | 172 +
.../nbl_hw/nbl_hw_leonis/base/nbl_datapath.h | 11 +
.../nbl_hw_leonis/base/nbl_datapath_dped.h | 2152 +++++++++
.../nbl_hw_leonis/base/nbl_datapath_dstore.h | 929 ++++
.../nbl_hw_leonis/base/nbl_datapath_ucar.h | 414 ++
.../nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h | 10 +
.../nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h | 665 +++
.../nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h | 1397 ++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h | 1129 +++++
.../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c | 3903 +++++++++++++++++
.../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h | 12 +
.../nbl/nbl_include/nbl_def_channel.h | 338 ++
.../nbl/nbl_include/nbl_include.h | 17 +
14 files changed, 11150 insertions(+)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index 5162d1f12b51..4a492f89c5bc 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o
nbl_core-objs += nbl_channel/nbl_channel.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \
+ nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
nbl_core/nbl_dispatch.o \
nbl_core/nbl_service.o \
nbl_core/nbl_dev.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h
new file mode 100644
index 000000000000..06e17fa1901e
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_HW_H_
+#define _NBL_HW_H_
+
+#include "nbl_include.h"
+
+#define NBL_MAX_ETHERNET (4)
+
+#define NBL_PT_PP0 0
+#define NBL_PT_LEN 3
+#define NBL_TCAM_TABLE_LEN (64)
+#define NBL_MCC_ID_INVALID U16_MAX
+#define NBL_KT_BYTE_LEN 40
+#define NBL_KT_BYTE_HALF_LEN 20
+
+#define NBL_EM0_PT_HW_UP_TUNNEL_L2 0
+#define NBL_EM0_PT_HW_UP_L2 1
+#define NBL_EM0_PT_HW_DOWN_L2 2
+#define NBL_EM0_PT_HW_UP_LLDP_LACP 3
+#define NBL_EM0_PT_PMD_ND_UPCALL 4
+#define NBL_EM0_PT_HW_L2_UP_MULTI_MCAST 5
+#define NBL_EM0_PT_HW_L3_UP_MULTI_MCAST 6
+#define NBL_EM0_PT_HW_L2_DOWN_MULTI_MCAST 7
+#define NBL_EM0_PT_HW_L3_DOWN_MULTI_MCAST 8
+#define NBL_EM0_PT_HW_DPRBAC_IPV4 9
+#define NBL_EM0_PT_HW_DPRBAC_IPV6 10
+#define NBL_EM0_PT_HW_UL4S_IPV4 11
+#define NBL_EM0_PT_HW_UL4S_IPV6 12
+
+#define NBL_PP0_PROFILE_ID_MIN (0)
+#define NBL_PP0_PROFILE_ID_MAX (15)
+#define NBL_PP1_PROFILE_ID_MIN (16)
+#define NBL_PP1_PROFILE_ID_MAX (31)
+#define NBL_PP2_PROFILE_ID_MIN (32)
+#define NBL_PP2_PROFILE_ID_MAX (47)
+#define NBL_PP_PROFILE_NUM (16)
+
+#define NBL_QID_MAP_TABLE_ENTRIES (4096)
+#define NBL_EPRO_PF_RSS_RET_TBL_DEPTH (4096)
+#define NBL_EPRO_RSS_RET_TBL_DEPTH (8192 * 2)
+#define NBL_EPRO_RSS_ENTRY_SIZE_UNIT (16)
+
+#define NBL_EPRO_PF_RSS_RET_TBL_COUNT (512)
+#define NBL_EPRO_PF_RSS_ENTRY_SIZE (5)
+
+#define NBL_EPRO_RSS_ENTRY_MAX_COUNT (512)
+#define NBL_EPRO_RSS_ENTRY_MAX_SIZE (4)
+
+#define NBL_EPRO_RSS_SK_SIZE 40
+#define NBL_EPRO_RSS_PER_KEY_SIZE 8
+#define NBL_EPRO_RSS_KEY_NUM (NBL_EPRO_RSS_SK_SIZE / NBL_EPRO_RSS_PER_KEY_SIZE)
+
+enum {
+ NBL_HT0,
+ NBL_HT1,
+ NBL_HT_MAX,
+};
+
+enum {
+ NBL_KT_HALF_MODE,
+ NBL_KT_FULL_MODE,
+};
+
+#pragma pack(1)
+union nbl_action_data {
+ union dport_act {
+ struct {
+ /* port_type = SET_DPORT_TYPE_ETH_LAG, set the eth and
+ * lag field.
+ */
+ u16 dport_info:10;
+ u16 dport_type:2;
+ #define FWD_DPORT_TYPE_ETH (0)
+ #define FWD_DPORT_TYPE_LAG (1)
+ #define FWD_DPORT_TYPE_VSI (2)
+ u16 dport_id:4;
+ #define FWD_DPORT_ID_HOST_TLS (0)
+ #define FWD_DPORT_ID_ECPU_TLS (1)
+ #define FWD_DPORT_ID_HOST_RDMA (2)
+ #define FWD_DPORT_ID_ECPU_RDMA (3)
+ #define FWD_DPORT_ID_EMP (4)
+ #define FWD_DPORT_ID_BMC (5)
+ #define FWD_DPORT_ID_LOOP_BACK (7)
+ #define FWD_DPORT_ID_ETH0 (8)
+ #define FWD_DPORT_ID_ETH1 (9)
+ #define FWD_DPORT_ID_ETH2 (10)
+ #define FWD_DPORT_ID_ETH3 (11)
+ } fwd_dport;
+
+ struct {
+ /* port_type = SET_DPORT_TYPE_ETH_LAG,
+ * set the eth and lag field.
+ */
+ u16 eth_id:2;
+ u16 lag_id:2;
+ u16 eth_vld:1;
+ u16 lag_vld:1;
+ u16 rsv:4;
+ u16 port_type:2;
+ u16 next_stg_sel:2;
+ u16 upcall_flag:2;
+ } down;
+
+ struct {
+ /* port_type = SET_DPORT_TYPE_VSI_HOST and
+ * SET_DPORT_TYPE_VSI_ECPU,
+ * set the port_id field as the vsi_id.
+ * port_type = SET_DPORT_TYPE_SP_PORT, set the port_id
+ * as the defined PORT_TYPE_SP_*.
+ */
+ u16 port_id:10;
+ #define PORT_TYPE_SP_DROP (0x3FF)
+ #define PORT_TYPE_SP_GLB_LB (0x3FE)
+ #define PORT_TYPE_SP_BMC (0x3FD)
+ #define PORT_TYPE_SP_EMP (0x3FC)
+ u16 port_type:2;
+ #define SET_DPORT_TYPE_VSI_HOST (0)
+ #define SET_DPORT_TYPE_VSI_ECPU (1)
+ #define SET_DPORT_TYPE_ETH_LAG (2)
+ #define SET_DPORT_TYPE_SP_PORT (3)
+ u16 next_stg_sel:2;
+ #define NEXT_STG_SEL_NONE (0)
+ #define NEXT_STG_SEL_ACL_S0 (1)
+ #define NEXT_STG_SEL_EPRO (2)
+ #define NEXT_STG_SEL_BYPASS (3)
+ u16 upcall_flag:2;
+ #define AUX_KEEP_FWD_TYPE (0)
+ #define AUX_FWD_TYPE_NML_FWD (1)
+ #define AUX_FWD_TYPE_UPCALL (2)
+ } up;
+ } dport;
+
+ struct dqueue_act {
+ u16 que_id:11;
+ u16 rsv:5;
+ } dqueue;
+
+ struct mcc_id_act {
+ u16 mcc_id:13;
+ u16 pri:1;
+ #define NBL_MCC_PRI_HIGH (0)
+ #define NBL_MCC_PRI_LOW (1)
+ u16 rsv:2;
+ } mcc_idx;
+
+ struct set_aux_act {
+ u16 nstg_val:4;
+ u16 nstg_vld:1;
+ u16 ftype_val:3;
+ u16 ftype_vld:1;
+ u16 pkt_cos_val:3;
+ u16 pcos_vld:1;
+ u16 rsv:1;
+ #define NBL_SET_AUX_CLR_FLG (0)
+ #define NBL_SET_AUX_SET_FLG (1)
+ #define NBL_SET_AUX_SET_AUX (2)
+ u16 sub_id:2;
+ } set_aux;
+
+ u16 data;
+};
+
+#pragma pack()
+
+#define NBL_SPORT_ETH_OFFSET 8
+
+#endif
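union nbl_action_data packs a 16-bit action word out of bitfields, so writing the named fields and reading `data` yields the raw encoding the hardware expects. A reduced sketch of the fwd_dport variant, assuming the usual little-endian ABI where the first-declared bitfield occupies the low bits (the union and helper names here are illustrative copies, not the driver's own):

```c
#include <stdint.h>

/* Reduced copy of the fwd_dport layout from union nbl_action_data:
 * a 16-bit action word split into info/type/id fields. */
#pragma pack(1)
union action_data_sim {
	struct {
		uint16_t dport_info:10;
		uint16_t dport_type:2;
		uint16_t dport_id:4;
	} fwd_dport;
	uint16_t data;
};
#pragma pack()

static uint16_t encode_eth_dport(uint16_t info, uint16_t eth_port)
{
	union action_data_sim act = { .data = 0 };

	act.fwd_dport.dport_info = info;
	act.fwd_dport.dport_type = 0;		/* FWD_DPORT_TYPE_ETH */
	act.fwd_dport.dport_id = eth_port;	/* e.g. FWD_DPORT_ID_ETH0 */
	return act.data;
}
```

Bitfield ordering is implementation-defined in C, which is why a layout like this is only portable within the one ABI the generated headers target.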
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h
new file mode 100644
index 000000000000..87a0f432cbd5
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#include "nbl_datapath_ucar.h"
+#include "nbl_datapath_dped.h"
+#include "nbl_datapath_dstore.h"
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h
new file mode 100644
index 000000000000..2715ce4ae32a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h
@@ -0,0 +1,2152 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_DPED_H
+#define NBL_DPED_H 1
+
+#include <linux/types.h>
+
+#define NBL_DPED_BASE (0x0075C000)
+
+#define NBL_DPED_INT_STATUS_ADDR (0x75c000)
+#define NBL_DPED_INT_STATUS_DEPTH (1)
+#define NBL_DPED_INT_STATUS_WIDTH (32)
+#define NBL_DPED_INT_STATUS_DWLEN (1)
+union dped_int_status_u {
+ struct dped_int_status {
+ u32 pkt_length_err:1; /* [0] Default:0x0 RWC */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 RWC */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 RWC */
+ u32 fsm_err:1; /* [3] Default:0x0 RWC */
+ u32 cif_err:1; /* [4] Default:0x0 RWC */
+ u32 input_err:1; /* [5] Default:0x0 RWC */
+ u32 cfg_err:1; /* [6] Default:0x0 RWC */
+ u32 data_ucor_err:1; /* [7] Default:0x0 RWC */
+ u32 inmeta_ucor_err:1; /* [8] Default:0x0 RWC */
+ u32 meta_ucor_err:1; /* [9] Default:0x0 RWC */
+ u32 meta_cor_ecc_err:1; /* [10] Default:0x0 RWC */
+ u32 fwd_atid_nomat_err:1; /* [11] Default:0x0 RWC */
+ u32 meta_value_err:1; /* [12] Default:0x0 RWC */
+ u32 edit_atnum_err:1; /* [13] Default:0x0 RWC */
+ u32 header_oft_ovf:1; /* [14] Default:0x0 RWC */
+ u32 edit_pos_err:1; /* [15] Default:0x0 RWC */
+ u32 da_oft_len_ovf:1; /* [16] Default:0x0 RWC */
+ u32 lxoffset_ovf:1; /* [17] Default:0x0 RWC */
+ u32 add_head_ovf:1; /* [18] Default:0x0 RWC */
+ u32 rsv:13; /* [31:19] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_DPED_INT_MASK_ADDR (0x75c004)
+#define NBL_DPED_INT_MASK_DEPTH (1)
+#define NBL_DPED_INT_MASK_WIDTH (32)
+#define NBL_DPED_INT_MASK_DWLEN (1)
+union dped_int_mask_u {
+ struct dped_int_mask {
+ u32 pkt_length_err:1; /* [0] Default:0x0 RW */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 RW */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 RW */
+ u32 fsm_err:1; /* [3] Default:0x0 RW */
+ u32 cif_err:1; /* [4] Default:0x0 RW */
+ u32 input_err:1; /* [5] Default:0x0 RW */
+ u32 cfg_err:1; /* [6] Default:0x0 RW */
+ u32 data_ucor_err:1; /* [7] Default:0x0 RW */
+ u32 inmeta_ucor_err:1; /* [8] Default:0x0 RW */
+ u32 meta_ucor_err:1; /* [9] Default:0x0 RW */
+ u32 meta_cor_ecc_err:1; /* [10] Default:0x0 RW */
+ u32 fwd_atid_nomat_err:1; /* [11] Default:0x1 RW */
+ u32 meta_value_err:1; /* [12] Default:0x0 RW */
+ u32 edit_atnum_err:1; /* [13] Default:0x0 RW */
+ u32 header_oft_ovf:1; /* [14] Default:0x0 RW */
+ u32 edit_pos_err:1; /* [15] Default:0x0 RW */
+ u32 da_oft_len_ovf:1; /* [16] Default:0x0 RW */
+ u32 lxoffset_ovf:1; /* [17] Default:0x0 RW */
+ u32 add_head_ovf:1; /* [18] Default:0x0 RW */
+ u32 rsv:13; /* [31:19] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_DPED_INT_SET_ADDR (0x75c008)
+#define NBL_DPED_INT_SET_DEPTH (1)
+#define NBL_DPED_INT_SET_WIDTH (32)
+#define NBL_DPED_INT_SET_DWLEN (1)
+union dped_int_set_u {
+ struct dped_int_set {
+ u32 pkt_length_err:1; /* [0] Default:0x0 WO */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 WO */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 WO */
+ u32 fsm_err:1; /* [3] Default:0x0 WO */
+ u32 cif_err:1; /* [4] Default:0x0 WO */
+ u32 input_err:1; /* [5] Default:0x0 WO */
+ u32 cfg_err:1; /* [6] Default:0x0 WO */
+ u32 data_ucor_err:1; /* [7] Default:0x0 WO */
+ u32 inmeta_ucor_err:1; /* [8] Default:0x0 WO */
+ u32 meta_ucor_err:1; /* [9] Default:0x0 WO */
+ u32 meta_cor_ecc_err:1; /* [10] Default:0x0 WO */
+ u32 fwd_atid_nomat_err:1; /* [11] Default:0x0 WO */
+ u32 meta_value_err:1; /* [12] Default:0x0 WO */
+ u32 edit_atnum_err:1; /* [13] Default:0x0 WO */
+ u32 header_oft_ovf:1; /* [14] Default:0x0 WO */
+ u32 edit_pos_err:1; /* [15] Default:0x0 WO */
+ u32 da_oft_len_ovf:1; /* [16] Default:0x0 WO */
+ u32 lxoffset_ovf:1; /* [17] Default:0x0 WO */
+ u32 add_head_ovf:1; /* [18] Default:0x0 WO */
+ u32 rsv:13; /* [31:19] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_DPED_INIT_DONE_ADDR (0x75c00c)
+#define NBL_DPED_INIT_DONE_DEPTH (1)
+#define NBL_DPED_INIT_DONE_WIDTH (32)
+#define NBL_DPED_INIT_DONE_DWLEN (1)
+union dped_init_done_u {
+ struct dped_init_done {
+ u32 done:1; /* [00:00] Default:0x0 RO */
+ u32 rsv:31; /* [31:01] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_DPED_PKT_LENGTH_ERR_INFO_ADDR (0x75c020)
+#define NBL_DPED_PKT_LENGTH_ERR_INFO_DEPTH (1)
+#define NBL_DPED_PKT_LENGTH_ERR_INFO_WIDTH (32)
+#define NBL_DPED_PKT_LENGTH_ERR_INFO_DWLEN (1)
+union dped_pkt_length_err_info_u {
+ struct dped_pkt_length_err_info {
+ u32 ptr_eop:1; /* [0] Default:0x0 RC */
+ u32 pkt_eop:1; /* [1] Default:0x0 RC */
+ u32 pkt_mod:1; /* [2] Default:0x0 RC */
+ u32 rsv:29; /* [31:3] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_PKT_LENGTH_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_CIF_ERR_INFO_ADDR (0x75c040)
+#define NBL_DPED_CIF_ERR_INFO_DEPTH (1)
+#define NBL_DPED_CIF_ERR_INFO_WIDTH (32)
+#define NBL_DPED_CIF_ERR_INFO_DWLEN (1)
+union dped_cif_err_info_u {
+ struct dped_cif_err_info {
+ u32 addr:30; /* [29:0] Default:0x0 RO */
+ u32 wr_err:1; /* [30] Default:0x0 RO */
+ u32 ucor_err:1; /* [31] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_INPUT_ERR_INFO_ADDR (0x75c048)
+#define NBL_DPED_INPUT_ERR_INFO_DEPTH (1)
+#define NBL_DPED_INPUT_ERR_INFO_WIDTH (32)
+#define NBL_DPED_INPUT_ERR_INFO_DWLEN (1)
+union dped_input_err_info_u {
+ struct dped_input_err_info {
+ u32 eoc_miss:1; /* [0] Default:0x0 RC */
+ u32 soc_miss:1; /* [1] Default:0x0 RC */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INPUT_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_CFG_ERR_INFO_ADDR (0x75c050)
+#define NBL_DPED_CFG_ERR_INFO_DEPTH (1)
+#define NBL_DPED_CFG_ERR_INFO_WIDTH (32)
+#define NBL_DPED_CFG_ERR_INFO_DWLEN (1)
+union dped_cfg_err_info_u {
+ struct dped_cfg_err_info {
+ u32 length:1; /* [0] Default:0x0 RC */
+ u32 rd_conflict:1; /* [1] Default:0x0 RC */
+ u32 rd_addr:8; /* [9:2] Default:0x0 RC */
+ u32 rsv:22; /* [31:10] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_CFG_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_ADDR (0x75c06c)
+#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_DEPTH (1)
+#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_WIDTH (32)
+#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_DWLEN (1)
+union dped_fwd_atid_nomat_err_info_u {
+ struct dped_fwd_atid_nomat_err_info {
+ u32 dport:1; /* [0] Default:0x0 RC */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_META_VALUE_ERR_INFO_ADDR (0x75c070)
+#define NBL_DPED_META_VALUE_ERR_INFO_DEPTH (1)
+#define NBL_DPED_META_VALUE_ERR_INFO_WIDTH (32)
+#define NBL_DPED_META_VALUE_ERR_INFO_DWLEN (1)
+union dped_meta_value_err_info_u {
+ struct dped_meta_value_err_info {
+ u32 sport:1; /* [0] Default:0x0 RC */
+ u32 dport:1; /* [1] Default:0x0 RC */
+ u32 dscp_ecn:1; /* [2] Default:0x0 RC */
+ u32 tnl:1; /* [3] Default:0x0 RC */
+ u32 vni:1; /* [4] Default:0x0 RC */
+ u32 vni_one:1; /* [5] Default:0x0 RC */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_META_VALUE_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_EDIT_ATNUM_ERR_INFO_ADDR (0x75c078)
+#define NBL_DPED_EDIT_ATNUM_ERR_INFO_DEPTH (1)
+#define NBL_DPED_EDIT_ATNUM_ERR_INFO_WIDTH (32)
+#define NBL_DPED_EDIT_ATNUM_ERR_INFO_DWLEN (1)
+union dped_edit_atnum_err_info_u {
+ struct dped_edit_atnum_err_info {
+ u32 replace:1; /* [0] Default:0x0 RC */
+ u32 del_add:1; /* [1] Default:0x0 RC */
+ u32 ttl:1; /* [2] Default:0x0 RC */
+ u32 dscp:1; /* [3] Default:0x0 RC */
+ u32 tnl:1; /* [4] Default:0x0 RC */
+ u32 sport:1; /* [5] Default:0x0 RC */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_EDIT_ATNUM_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_HEADER_OFT_OVF_ADDR (0x75c080)
+#define NBL_DPED_HEADER_OFT_OVF_DEPTH (1)
+#define NBL_DPED_HEADER_OFT_OVF_WIDTH (32)
+#define NBL_DPED_HEADER_OFT_OVF_DWLEN (1)
+union dped_header_oft_ovf_u {
+ struct dped_header_oft_ovf {
+ u32 replace:1; /* [0] Default:0x0 RC */
+ u32 rsv2:7; /* [7:1] Default:0x0 RO */
+ u32 add_del:6; /* [13:8] Default:0x0 RC */
+ u32 dscp_ecn:1; /* [14] Default:0x0 RC */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 ttl:1; /* [16] Default:0x0 RC */
+ u32 sctp:1; /* [17] Default:0x0 RC */
+ u32 dscp:1; /* [18] Default:0x0 RC */
+ u32 pri:1; /* [19] Default:0x0 RC */
+ u32 len0:1; /* [20] Default:0x0 RC */
+ u32 len1:1; /* [21] Default:0x0 RC */
+ u32 ck0:1; /* [22] Default:0x0 RC */
+ u32 ck1:1; /* [23] Default:0x0 RC */
+ u32 ck_start0_0:1; /* [24] Default:0x0 RC */
+ u32 ck_start0_1:1; /* [25] Default:0x0 RC */
+ u32 ck_start1_0:1; /* [26] Default:0x0 RC */
+ u32 ck_start1_1:1; /* [27] Default:0x0 RC */
+ u32 head:1; /* [28] Default:0x0 RC */
+ u32 ck_len0:1; /* [29] Default:0x0 RC */
+ u32 ck_len1:1; /* [30] Default:0x0 RC */
+ u32 rsv:1; /* [31] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HEADER_OFT_OVF_DWLEN];
+} __packed;
+
+#define NBL_DPED_EDIT_POS_ERR_ADDR (0x75c088)
+#define NBL_DPED_EDIT_POS_ERR_DEPTH (1)
+#define NBL_DPED_EDIT_POS_ERR_WIDTH (32)
+#define NBL_DPED_EDIT_POS_ERR_DWLEN (1)
+union dped_edit_pos_err_u {
+ struct dped_edit_pos_err {
+ u32 replace:1; /* [0] Default:0x0 RC */
+ u32 cross_level:6; /* [6:1] Default:0x0 RC */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 add_del:6; /* [13:8] Default:0x0 RC */
+ u32 dscp_ecn:1; /* [14] Default:0x0 RC */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 ttl:1; /* [16] Default:0x0 RC */
+ u32 sctp:1; /* [17] Default:0x0 RC */
+ u32 dscp:1; /* [18] Default:0x0 RC */
+ u32 pri:1; /* [19] Default:0x0 RC */
+ u32 len0:1; /* [20] Default:0x0 RC */
+ u32 len1:1; /* [21] Default:0x0 RC */
+ u32 ck0:1; /* [22] Default:0x0 RC */
+ u32 ck1:1; /* [23] Default:0x0 RC */
+ u32 ck_start0_0:1; /* [24] Default:0x0 RC */
+ u32 ck_start0_1:1; /* [25] Default:0x0 RC */
+ u32 ck_start1_0:1; /* [26] Default:0x0 RC */
+ u32 ck_start1_1:1; /* [27] Default:0x0 RC */
+ u32 ck_len0:1; /* [28] Default:0x0 RC */
+ u32 ck_len1:1; /* [29] Default:0x0 RC */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_EDIT_POS_ERR_DWLEN];
+} __packed;
+
+#define NBL_DPED_DA_OFT_LEN_OVF_ADDR (0x75c090)
+#define NBL_DPED_DA_OFT_LEN_OVF_DEPTH (1)
+#define NBL_DPED_DA_OFT_LEN_OVF_WIDTH (32)
+#define NBL_DPED_DA_OFT_LEN_OVF_DWLEN (1)
+union dped_da_oft_len_ovf_u {
+ struct dped_da_oft_len_ovf {
+ u32 at0:5; /* [4:0] Default:0x0 RC */
+ u32 at1:5; /* [9:5] Default:0x0 RC */
+ u32 at2:5; /* [14:10] Default:0x0 RC */
+ u32 at3:5; /* [19:15] Default:0x0 RC */
+ u32 at4:5; /* [24:20] Default:0x0 RC */
+ u32 at5:5; /* [29:25] Default:0x0 RC */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_DA_OFT_LEN_OVF_DWLEN];
+} __packed;
+
+#define NBL_DPED_LXOFFSET_OVF_ADDR (0x75c098)
+#define NBL_DPED_LXOFFSET_OVF_DEPTH (1)
+#define NBL_DPED_LXOFFSET_OVF_WIDTH (32)
+#define NBL_DPED_LXOFFSET_OVF_DWLEN (1)
+union dped_lxoffset_ovf_u {
+ struct dped_lxoffset_ovf {
+ u32 l2:1; /* [0] Default:0x0 RC */
+ u32 l3:1; /* [1] Default:0x0 RC */
+ u32 l4:1; /* [2] Default:0x0 RC */
+ u32 pld:1; /* [3] Default:0x0 RC */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_LXOFFSET_OVF_DWLEN];
+} __packed;
+
+#define NBL_DPED_ADD_HEAD_OVF_ADDR (0x75c0a0)
+#define NBL_DPED_ADD_HEAD_OVF_DEPTH (1)
+#define NBL_DPED_ADD_HEAD_OVF_WIDTH (32)
+#define NBL_DPED_ADD_HEAD_OVF_DWLEN (1)
+union dped_add_head_ovf_u {
+ struct dped_add_head_ovf {
+ u32 tnl_l2:1; /* [0] Default:0x0 RC */
+ u32 tnl_pkt:1; /* [1] Default:0x0 RC */
+ u32 rsv1:14; /* [15:2] Default:0x0 RO */
+ u32 mir_l2:1; /* [16] Default:0x0 RC */
+ u32 mir_pkt:1; /* [17] Default:0x0 RC */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_ADD_HEAD_OVF_DWLEN];
+} __packed;
+
+#define NBL_DPED_CAR_CTRL_ADDR (0x75c100)
+#define NBL_DPED_CAR_CTRL_DEPTH (1)
+#define NBL_DPED_CAR_CTRL_WIDTH (32)
+#define NBL_DPED_CAR_CTRL_DWLEN (1)
+union dped_car_ctrl_u {
+ struct dped_car_ctrl {
+ u32 sctr_car:1; /* [00:00] Default:0x1 RW */
+ u32 rctr_car:1; /* [01:01] Default:0x1 RW */
+ u32 rc_car:1; /* [02:02] Default:0x1 RW */
+ u32 tbl_rc_car:1; /* [03:03] Default:0x1 RW */
+ u32 rsv:28; /* [31:04] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_DPED_INIT_START_ADDR (0x75c10c)
+#define NBL_DPED_INIT_START_DEPTH (1)
+#define NBL_DPED_INIT_START_WIDTH (32)
+#define NBL_DPED_INIT_START_DWLEN (1)
+union dped_init_start_u {
+ struct dped_init_start {
+ u32 start:1; /* [00:00] Default:0x0 WO */
+ u32 rsv:31; /* [31:01] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_DPED_TIMEOUT_CFG_ADDR (0x75c110)
+#define NBL_DPED_TIMEOUT_CFG_DEPTH (1)
+#define NBL_DPED_TIMEOUT_CFG_WIDTH (32)
+#define NBL_DPED_TIMEOUT_CFG_DWLEN (1)
+union dped_timeout_cfg_u {
+ struct dped_timeout_cfg {
+ u32 fsm_max_num:16; /* [15:00] Default:0xfff RW */
+ u32 tab:8; /* [23:16] Default:0x40 RW */
+ u32 rsv:8; /* [31:24] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TIMEOUT_CFG_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_MAX_LENGTH_ADDR (0x75c154)
+#define NBL_DPED_TNL_MAX_LENGTH_DEPTH (1)
+#define NBL_DPED_TNL_MAX_LENGTH_WIDTH (32)
+#define NBL_DPED_TNL_MAX_LENGTH_DWLEN (1)
+union dped_tnl_max_length_u {
+ struct dped_tnl_max_length {
+ u32 th:7; /* [6:0] Default:0x5A RW */
+ u32 rsv:25; /* [31:7] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_MAX_LENGTH_DWLEN];
+} __packed;
+
+#define NBL_DPED_PKT_DROP_EN_ADDR (0x75c170)
+#define NBL_DPED_PKT_DROP_EN_DEPTH (1)
+#define NBL_DPED_PKT_DROP_EN_WIDTH (32)
+#define NBL_DPED_PKT_DROP_EN_DWLEN (1)
+union dped_pkt_drop_en_u {
+ struct dped_pkt_drop_en {
+ u32 en:1; /* [0] Default:0x1 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_PKT_DROP_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_PKT_HERR_DROP_EN_ADDR (0x75c174)
+#define NBL_DPED_PKT_HERR_DROP_EN_DEPTH (1)
+#define NBL_DPED_PKT_HERR_DROP_EN_WIDTH (32)
+#define NBL_DPED_PKT_HERR_DROP_EN_DWLEN (1)
+union dped_pkt_herr_drop_en_u {
+ struct dped_pkt_herr_drop_en {
+ u32 en:1; /* [0] Default:0x1 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_PKT_HERR_DROP_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_PKT_PARITY_DROP_EN_ADDR (0x75c178)
+#define NBL_DPED_PKT_PARITY_DROP_EN_DEPTH (1)
+#define NBL_DPED_PKT_PARITY_DROP_EN_WIDTH (32)
+#define NBL_DPED_PKT_PARITY_DROP_EN_DWLEN (1)
+union dped_pkt_parity_drop_en_u {
+ struct dped_pkt_parity_drop_en {
+ u32 en0:1; /* [0] Default:0x1 RW */
+ u32 en1:1; /* [1] Default:0x1 RW */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_PKT_PARITY_DROP_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_TTL_DROP_EN_ADDR (0x75c17c)
+#define NBL_DPED_TTL_DROP_EN_DEPTH (1)
+#define NBL_DPED_TTL_DROP_EN_WIDTH (32)
+#define NBL_DPED_TTL_DROP_EN_DWLEN (1)
+union dped_ttl_drop_en_u {
+ struct dped_ttl_drop_en {
+ u32 en:1; /* [0] Default:0x1 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TTL_DROP_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_TTL_ERROR_CODE_ADDR (0x75c188)
+#define NBL_DPED_TTL_ERROR_CODE_DEPTH (1)
+#define NBL_DPED_TTL_ERROR_CODE_WIDTH (32)
+#define NBL_DPED_TTL_ERROR_CODE_DWLEN (1)
+union dped_ttl_error_code_u {
+ struct dped_ttl_error_code {
+ u32 en:1; /* [0] Default:0x1 RW */
+ u32 rsv1:7; /* [7:1] Default:0x0 RO */
+ u32 id:4; /* [11:8] Default:0x6 RW */
+ u32 rsv:20; /* [31:12] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TTL_ERROR_CODE_DWLEN];
+} __packed;
+
+#define NBL_DPED_HIGH_PRI_PKT_EN_ADDR (0x75c190)
+#define NBL_DPED_HIGH_PRI_PKT_EN_DEPTH (1)
+#define NBL_DPED_HIGH_PRI_PKT_EN_WIDTH (32)
+#define NBL_DPED_HIGH_PRI_PKT_EN_DWLEN (1)
+union dped_high_pri_pkt_en_u {
+ struct dped_high_pri_pkt_en {
+ u32 en:1; /* [0] Default:0x1 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HIGH_PRI_PKT_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_PADDING_CFG_ADDR (0x75c194)
+#define NBL_DPED_PADDING_CFG_DEPTH (1)
+#define NBL_DPED_PADDING_CFG_WIDTH (32)
+#define NBL_DPED_PADDING_CFG_DWLEN (1)
+union dped_padding_cfg_u {
+ struct dped_padding_cfg {
+ u32 th:6; /* [5:0] Default:0x3B RW */
+ u32 rsv1:2; /* [7:6] Default:0x0 RO */
+ u32 mode:2; /* [9:8] Default:0x0 RW */
+ u32 rsv:22; /* [31:10] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_PADDING_CFG_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDIT_FLAG_SEL0_ADDR (0x75c204)
+#define NBL_DPED_HW_EDIT_FLAG_SEL0_DEPTH (1)
+#define NBL_DPED_HW_EDIT_FLAG_SEL0_WIDTH (32)
+#define NBL_DPED_HW_EDIT_FLAG_SEL0_DWLEN (1)
+union dped_hw_edit_flag_sel0_u {
+ struct dped_hw_edit_flag_sel0 {
+ u32 oft:5; /* [4:0] Default:0x1 RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HW_EDIT_FLAG_SEL0_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDIT_FLAG_SEL1_ADDR (0x75c208)
+#define NBL_DPED_HW_EDIT_FLAG_SEL1_DEPTH (1)
+#define NBL_DPED_HW_EDIT_FLAG_SEL1_WIDTH (32)
+#define NBL_DPED_HW_EDIT_FLAG_SEL1_DWLEN (1)
+union dped_hw_edit_flag_sel1_u {
+ struct dped_hw_edit_flag_sel1 {
+ u32 oft:5; /* [4:0] Default:0x2 RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HW_EDIT_FLAG_SEL1_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDIT_FLAG_SEL2_ADDR (0x75c20c)
+#define NBL_DPED_HW_EDIT_FLAG_SEL2_DEPTH (1)
+#define NBL_DPED_HW_EDIT_FLAG_SEL2_WIDTH (32)
+#define NBL_DPED_HW_EDIT_FLAG_SEL2_DWLEN (1)
+union dped_hw_edit_flag_sel2_u {
+ struct dped_hw_edit_flag_sel2 {
+ u32 oft:5; /* [4:0] Default:0x3 RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HW_EDIT_FLAG_SEL2_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDIT_FLAG_SEL3_ADDR (0x75c210)
+#define NBL_DPED_HW_EDIT_FLAG_SEL3_DEPTH (1)
+#define NBL_DPED_HW_EDIT_FLAG_SEL3_WIDTH (32)
+#define NBL_DPED_HW_EDIT_FLAG_SEL3_DWLEN (1)
+union dped_hw_edit_flag_sel3_u {
+ struct dped_hw_edit_flag_sel3 {
+ u32 oft:5; /* [4:0] Default:0x4 RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HW_EDIT_FLAG_SEL3_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDIT_FLAG_SEL4_ADDR (0x75c214)
+#define NBL_DPED_HW_EDIT_FLAG_SEL4_DEPTH (1)
+#define NBL_DPED_HW_EDIT_FLAG_SEL4_WIDTH (32)
+#define NBL_DPED_HW_EDIT_FLAG_SEL4_DWLEN (1)
+union dped_hw_edit_flag_sel4_u {
+ struct dped_hw_edit_flag_sel4 {
+ u32 oft:5; /* [4:0] Default:0xe RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HW_EDIT_FLAG_SEL4_DWLEN];
+} __packed;
+
+#define NBL_DPED_RDMA_FLAG_ADDR (0x75c22c)
+#define NBL_DPED_RDMA_FLAG_DEPTH (1)
+#define NBL_DPED_RDMA_FLAG_WIDTH (32)
+#define NBL_DPED_RDMA_FLAG_DWLEN (1)
+union dped_rdma_flag_u {
+ struct dped_rdma_flag {
+ u32 oft:5; /* [4:0] Default:0xa RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_RDMA_FLAG_DWLEN];
+} __packed;
+
+#define NBL_DPED_FWD_DPORT_ADDR (0x75c230)
+#define NBL_DPED_FWD_DPORT_DEPTH (1)
+#define NBL_DPED_FWD_DPORT_WIDTH (32)
+#define NBL_DPED_FWD_DPORT_DWLEN (1)
+union dped_fwd_dport_u {
+ struct dped_fwd_dport {
+ u32 id:6; /* [5:0] Default:0x9 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_FWD_DPORT_DWLEN];
+} __packed;
+
+#define NBL_DPED_FWD_MIRID_ADDR (0x75c238)
+#define NBL_DPED_FWD_MIRID_DEPTH (1)
+#define NBL_DPED_FWD_MIRID_WIDTH (32)
+#define NBL_DPED_FWD_MIRID_DWLEN (1)
+union dped_fwd_mirid_u {
+ struct dped_fwd_mirid {
+ u32 id:6; /* [5:0] Default:0x8 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_FWD_MIRID_DWLEN];
+} __packed;
+
+#define NBL_DPED_FWD_VNI0_ADDR (0x75c244)
+#define NBL_DPED_FWD_VNI0_DEPTH (1)
+#define NBL_DPED_FWD_VNI0_WIDTH (32)
+#define NBL_DPED_FWD_VNI0_DWLEN (1)
+union dped_fwd_vni0_u {
+ struct dped_fwd_vni0 {
+ u32 id:6; /* [5:0] Default:0xe RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_FWD_VNI0_DWLEN];
+} __packed;
+
+#define NBL_DPED_FWD_VNI1_ADDR (0x75c248)
+#define NBL_DPED_FWD_VNI1_DEPTH (1)
+#define NBL_DPED_FWD_VNI1_WIDTH (32)
+#define NBL_DPED_FWD_VNI1_DWLEN (1)
+union dped_fwd_vni1_u {
+ struct dped_fwd_vni1 {
+ u32 id:6; /* [5:0] Default:0xf RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_FWD_VNI1_DWLEN];
+} __packed;
+
+#define NBL_DPED_FWD_PRI_MDF_ADDR (0x75c250)
+#define NBL_DPED_FWD_PRI_MDF_DEPTH (1)
+#define NBL_DPED_FWD_PRI_MDF_WIDTH (32)
+#define NBL_DPED_FWD_PRI_MDF_DWLEN (1)
+union dped_fwd_pri_mdf_u {
+ struct dped_fwd_pri_mdf {
+ u32 id:6; /* [5:0] Default:0x15 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_FWD_PRI_MDF_DWLEN];
+} __packed;
+
+#define NBL_DPED_VLAN_TYPE0_ADDR (0x75c260)
+#define NBL_DPED_VLAN_TYPE0_DEPTH (1)
+#define NBL_DPED_VLAN_TYPE0_WIDTH (32)
+#define NBL_DPED_VLAN_TYPE0_DWLEN (1)
+union dped_vlan_type0_u {
+ struct dped_vlan_type0 {
+ u32 vau:16; /* [15:0] Default:0x8100 RW */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_VLAN_TYPE0_DWLEN];
+} __packed;
+
+#define NBL_DPED_VLAN_TYPE1_ADDR (0x75c264)
+#define NBL_DPED_VLAN_TYPE1_DEPTH (1)
+#define NBL_DPED_VLAN_TYPE1_WIDTH (32)
+#define NBL_DPED_VLAN_TYPE1_DWLEN (1)
+union dped_vlan_type1_u {
+ struct dped_vlan_type1 {
+ u32 vau:16; /* [15:0] Default:0x88A8 RW */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_VLAN_TYPE1_DWLEN];
+} __packed;
+
+#define NBL_DPED_VLAN_TYPE2_ADDR (0x75c268)
+#define NBL_DPED_VLAN_TYPE2_DEPTH (1)
+#define NBL_DPED_VLAN_TYPE2_WIDTH (32)
+#define NBL_DPED_VLAN_TYPE2_DWLEN (1)
+union dped_vlan_type2_u {
+ struct dped_vlan_type2 {
+ u32 vau:16; /* [15:0] Default:0x9100 RW */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_VLAN_TYPE2_DWLEN];
+} __packed;
+
+#define NBL_DPED_VLAN_TYPE3_ADDR (0x75c26c)
+#define NBL_DPED_VLAN_TYPE3_DEPTH (1)
+#define NBL_DPED_VLAN_TYPE3_WIDTH (32)
+#define NBL_DPED_VLAN_TYPE3_DWLEN (1)
+union dped_vlan_type3_u {
+ struct dped_vlan_type3 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_VLAN_TYPE3_DWLEN];
+} __packed;
+
+#define NBL_DPED_L3_LEN_MDY_CMD_0_ADDR (0x75c300)
+#define NBL_DPED_L3_LEN_MDY_CMD_0_DEPTH (1)
+#define NBL_DPED_L3_LEN_MDY_CMD_0_WIDTH (32)
+#define NBL_DPED_L3_LEN_MDY_CMD_0_DWLEN (1)
+union dped_l3_len_mdy_cmd_0_u {
+ struct dped_l3_len_mdy_cmd_0 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 in_oft:7; /* [14:8] Default:0x2 RW */
+ u32 rsv3:1; /* [15] Default:0x0 RO */
+ u32 phid:2; /* [17:16] Default:0x2 RW */
+ u32 rsv2:2; /* [19:18] Default:0x0 RO */
+ u32 mode:2; /* [21:20] Default:0x2 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 unit:1; /* [24] Default:0x0 RW */
+ u32 rsv:6; /* [30:25] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L3_LEN_MDY_CMD_0_DWLEN];
+} __packed;
+
+#define NBL_DPED_L3_LEN_MDY_CMD_1_ADDR (0x75c304)
+#define NBL_DPED_L3_LEN_MDY_CMD_1_DEPTH (1)
+#define NBL_DPED_L3_LEN_MDY_CMD_1_WIDTH (32)
+#define NBL_DPED_L3_LEN_MDY_CMD_1_DWLEN (1)
+union dped_l3_len_mdy_cmd_1_u {
+ struct dped_l3_len_mdy_cmd_1 {
+ u32 value:8; /* [7:0] Default:0x28 RW */
+ u32 in_oft:7; /* [14:8] Default:0x4 RW */
+ u32 rsv3:1; /* [15] Default:0x0 RO */
+ u32 phid:2; /* [17:16] Default:0x2 RW */
+ u32 rsv2:2; /* [19:18] Default:0x0 RO */
+ u32 mode:2; /* [21:20] Default:0x1 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 unit:1; /* [24] Default:0x0 RW */
+ u32 rsv:6; /* [30:25] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L3_LEN_MDY_CMD_1_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_LEN_MDY_CMD_0_ADDR (0x75c308)
+#define NBL_DPED_L4_LEN_MDY_CMD_0_DEPTH (1)
+#define NBL_DPED_L4_LEN_MDY_CMD_0_WIDTH (32)
+#define NBL_DPED_L4_LEN_MDY_CMD_0_DWLEN (1)
+union dped_l4_len_mdy_cmd_0_u {
+ struct dped_l4_len_mdy_cmd_0 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 in_oft:7; /* [14:8] Default:0xc RW */
+ u32 rsv3:1; /* [15] Default:0x0 RO */
+ u32 phid:2; /* [17:16] Default:0x3 RW */
+ u32 rsv2:2; /* [19:18] Default:0x0 RO */
+ u32 mode:2; /* [21:20] Default:0x0 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 unit:1; /* [24] Default:0x1 RW */
+ u32 rsv:6; /* [30:25] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_LEN_MDY_CMD_0_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_LEN_MDY_CMD_1_ADDR (0x75c30c)
+#define NBL_DPED_L4_LEN_MDY_CMD_1_DEPTH (1)
+#define NBL_DPED_L4_LEN_MDY_CMD_1_WIDTH (32)
+#define NBL_DPED_L4_LEN_MDY_CMD_1_DWLEN (1)
+union dped_l4_len_mdy_cmd_1_u {
+ struct dped_l4_len_mdy_cmd_1 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 in_oft:7; /* [14:8] Default:0x4 RW */
+ u32 rsv3:1; /* [15] Default:0x0 RO */
+ u32 phid:2; /* [17:16] Default:0x3 RW */
+ u32 rsv2:2; /* [19:18] Default:0x0 RO */
+ u32 mode:2; /* [21:20] Default:0x0 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 unit:1; /* [24] Default:0x1 RW */
+ u32 rsv:6; /* [30:25] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_LEN_MDY_CMD_1_DWLEN];
+} __packed;
+
+#define NBL_DPED_L3_CK_CMD_00_ADDR (0x75c310)
+#define NBL_DPED_L3_CK_CMD_00_DEPTH (1)
+#define NBL_DPED_L3_CK_CMD_00_WIDTH (32)
+#define NBL_DPED_L3_CK_CMD_00_DWLEN (1)
+union dped_l3_ck_cmd_00_u {
+ struct dped_l3_ck_cmd_00 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0xa RW */
+ u32 phid:2; /* [27:26] Default:0x2 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L3_CK_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_L3_CK_CMD_01_ADDR (0x75c314)
+#define NBL_DPED_L3_CK_CMD_01_DEPTH (1)
+#define NBL_DPED_L3_CK_CMD_01_WIDTH (32)
+#define NBL_DPED_L3_CK_CMD_01_DWLEN (1)
+union dped_l3_ck_cmd_01_u {
+ struct dped_l3_ck_cmd_01 {
+ u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L3_CK_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_00_ADDR (0x75c318)
+#define NBL_DPED_L4_CK_CMD_00_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_00_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_00_DWLEN (1)
+union dped_l4_ck_cmd_00_u {
+ struct dped_l4_ck_cmd_00 {
+ u32 value:8; /* [7:0] Default:0x6 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x2 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x10 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_01_ADDR (0x75c31c)
+#define NBL_DPED_L4_CK_CMD_01_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_01_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_01_DWLEN (1)
+union dped_l4_ck_cmd_01_u {
+ struct dped_l4_ck_cmd_01 {
+ u32 ck_start0:6; /* [5:0] Default:0xc RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x8 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_10_ADDR (0x75c320)
+#define NBL_DPED_L4_CK_CMD_10_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_10_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_10_DWLEN (1)
+union dped_l4_ck_cmd_10_u {
+ struct dped_l4_ck_cmd_10 {
+ u32 value:8; /* [7:0] Default:0x11 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x2 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x6 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x1 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_11_ADDR (0x75c324)
+#define NBL_DPED_L4_CK_CMD_11_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_11_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_11_DWLEN (1)
+union dped_l4_ck_cmd_11_u {
+ struct dped_l4_ck_cmd_11 {
+ u32 ck_start0:6; /* [5:0] Default:0xc RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x8 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_20_ADDR (0x75c328)
+#define NBL_DPED_L4_CK_CMD_20_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_20_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_20_DWLEN (1)
+union dped_l4_ck_cmd_20_u {
+ struct dped_l4_ck_cmd_20 {
+ u32 value:8; /* [7:0] Default:0x2e RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x4 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x10 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_20_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_21_ADDR (0x75c32c)
+#define NBL_DPED_L4_CK_CMD_21_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_21_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_21_DWLEN (1)
+union dped_l4_ck_cmd_21_u {
+ struct dped_l4_ck_cmd_21 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_21_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_30_ADDR (0x75c330)
+#define NBL_DPED_L4_CK_CMD_30_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_30_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_30_DWLEN (1)
+union dped_l4_ck_cmd_30_u {
+ struct dped_l4_ck_cmd_30 {
+ u32 value:8; /* [7:0] Default:0x39 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x4 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x6 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x1 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_30_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_31_ADDR (0x75c334)
+#define NBL_DPED_L4_CK_CMD_31_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_31_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_31_DWLEN (1)
+union dped_l4_ck_cmd_31_u {
+ struct dped_l4_ck_cmd_31 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_31_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_40_ADDR (0x75c338)
+#define NBL_DPED_L4_CK_CMD_40_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_40_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_40_DWLEN (1)
+union dped_l4_ck_cmd_40_u {
+ struct dped_l4_ck_cmd_40 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0x8 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x1 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_40_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_41_ADDR (0x75c33c)
+#define NBL_DPED_L4_CK_CMD_41_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_41_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_41_DWLEN (1)
+union dped_l4_ck_cmd_41_u {
+ struct dped_l4_ck_cmd_41 {
+ u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x0 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+ u32 ck_vld0:1; /* [15] Default:0x0 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_41_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_50_ADDR (0x75c340)
+#define NBL_DPED_L4_CK_CMD_50_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_50_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_50_DWLEN (1)
+union dped_l4_ck_cmd_50_u {
+ struct dped_l4_ck_cmd_50 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x2 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x2 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_50_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_51_ADDR (0x75c344)
+#define NBL_DPED_L4_CK_CMD_51_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_51_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_51_DWLEN (1)
+union dped_l4_ck_cmd_51_u {
+ struct dped_l4_ck_cmd_51 {
+ u32 ck_start0:6; /* [5:0] Default:0xc RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x8 RW */
+ u32 ck_vld0:1; /* [15] Default:0x0 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_51_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_60_ADDR (0x75c348)
+#define NBL_DPED_L4_CK_CMD_60_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_60_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_60_DWLEN (1)
+union dped_l4_ck_cmd_60_u {
+ struct dped_l4_ck_cmd_60 {
+ u32 value:8; /* [7:0] Default:0x62 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x4 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x2 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_60_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_61_ADDR (0x75c34c)
+#define NBL_DPED_L4_CK_CMD_61_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_61_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_61_DWLEN (1)
+union dped_l4_ck_cmd_61_u {
+ struct dped_l4_ck_cmd_61 {
+ u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x0 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+ u32 ck_vld0:1; /* [15] Default:0x0 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_L4_CK_CMD_61_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L3_CK_CMD_00_ADDR (0x75c350)
+#define NBL_DPED_TNL_L3_CK_CMD_00_DEPTH (1)
+#define NBL_DPED_TNL_L3_CK_CMD_00_WIDTH (32)
+#define NBL_DPED_TNL_L3_CK_CMD_00_DWLEN (1)
+union dped_tnl_l3_ck_cmd_00_u {
+ struct dped_tnl_l3_ck_cmd_00 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0xa RW */
+ u32 phid:2; /* [27:26] Default:0x2 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L3_CK_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L3_CK_CMD_01_ADDR (0x75c354)
+#define NBL_DPED_TNL_L3_CK_CMD_01_DEPTH (1)
+#define NBL_DPED_TNL_L3_CK_CMD_01_WIDTH (32)
+#define NBL_DPED_TNL_L3_CK_CMD_01_DWLEN (1)
+union dped_tnl_l3_ck_cmd_01_u {
+ struct dped_tnl_l3_ck_cmd_01 {
+ u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L3_CK_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_00_ADDR (0x75c360)
+#define NBL_DPED_TNL_L4_CK_CMD_00_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_00_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_00_DWLEN (1)
+union dped_tnl_l4_ck_cmd_00_u {
+ struct dped_tnl_l4_ck_cmd_00 {
+ u32 value:8; /* [7:0] Default:0x11 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x2 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x6 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x1 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_01_ADDR (0x75c364)
+#define NBL_DPED_TNL_L4_CK_CMD_01_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_01_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_01_DWLEN (1)
+union dped_tnl_l4_ck_cmd_01_u {
+ struct dped_tnl_l4_ck_cmd_01 {
+ u32 ck_start0:6; /* [5:0] Default:0xc RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x8 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_10_ADDR (0x75c368)
+#define NBL_DPED_TNL_L4_CK_CMD_10_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_10_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_10_DWLEN (1)
+union dped_tnl_l4_ck_cmd_10_u {
+ struct dped_tnl_l4_ck_cmd_10 {
+ u32 value:8; /* [7:0] Default:0x39 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x4 RW */
+ u32 len_phid:2; /* [16:15] Default:0x2 RW */
+ u32 len_vld:1; /* [17] Default:0x1 RW */
+ u32 data_vld:1; /* [18] Default:0x1 RW */
+ u32 in_oft:7; /* [25:19] Default:0x6 RW */
+ u32 phid:2; /* [27:26] Default:0x3 RW */
+ u32 flag:1; /* [28] Default:0x1 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_11_ADDR (0x75c36c)
+#define NBL_DPED_TNL_L4_CK_CMD_11_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_11_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_11_DWLEN (1)
+union dped_tnl_l4_ck_cmd_11_u {
+ struct dped_tnl_l4_ck_cmd_11 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_20_ADDR (0x75c370)
+#define NBL_DPED_TNL_L4_CK_CMD_20_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_20_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_20_DWLEN (1)
+union dped_tnl_l4_ck_cmd_20_u {
+ struct dped_tnl_l4_ck_cmd_20 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0x0 RW */
+ u32 phid:2; /* [27:26] Default:0x0 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_20_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_21_ADDR (0x75c374)
+#define NBL_DPED_TNL_L4_CK_CMD_21_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_21_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_21_DWLEN (1)
+union dped_tnl_l4_ck_cmd_21_u {
+ struct dped_tnl_l4_ck_cmd_21 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x14 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_21_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_30_ADDR (0x75c378)
+#define NBL_DPED_TNL_L4_CK_CMD_30_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_30_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_30_DWLEN (1)
+union dped_tnl_l4_ck_cmd_30_u {
+ struct dped_tnl_l4_ck_cmd_30 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0x0 RW */
+ u32 phid:2; /* [27:26] Default:0x0 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_30_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_31_ADDR (0x75c37c)
+#define NBL_DPED_TNL_L4_CK_CMD_31_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_31_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_31_DWLEN (1)
+union dped_tnl_l4_ck_cmd_31_u {
+ struct dped_tnl_l4_ck_cmd_31 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_31_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_40_ADDR (0x75c380)
+#define NBL_DPED_TNL_L4_CK_CMD_40_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_40_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_40_DWLEN (1)
+union dped_tnl_l4_ck_cmd_40_u {
+ struct dped_tnl_l4_ck_cmd_40 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0x0 RW */
+ u32 phid:2; /* [27:26] Default:0x0 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_40_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_41_ADDR (0x75c384)
+#define NBL_DPED_TNL_L4_CK_CMD_41_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_41_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_41_DWLEN (1)
+union dped_tnl_l4_ck_cmd_41_u {
+ struct dped_tnl_l4_ck_cmd_41 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_41_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_50_ADDR (0x75c388)
+#define NBL_DPED_TNL_L4_CK_CMD_50_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_50_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_50_DWLEN (1)
+union dped_tnl_l4_ck_cmd_50_u {
+ struct dped_tnl_l4_ck_cmd_50 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0x0 RW */
+ u32 phid:2; /* [27:26] Default:0x0 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_50_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_51_ADDR (0x75c38c)
+#define NBL_DPED_TNL_L4_CK_CMD_51_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_51_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_51_DWLEN (1)
+union dped_tnl_l4_ck_cmd_51_u {
+ struct dped_tnl_l4_ck_cmd_51 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_51_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_60_ADDR (0x75c390)
+#define NBL_DPED_TNL_L4_CK_CMD_60_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_60_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_60_DWLEN (1)
+union dped_tnl_l4_ck_cmd_60_u {
+ struct dped_tnl_l4_ck_cmd_60 {
+ u32 value:8; /* [7:0] Default:0x0 RW */
+ u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+ u32 len_phid:2; /* [16:15] Default:0x0 RW */
+ u32 len_vld:1; /* [17] Default:0x0 RW */
+ u32 data_vld:1; /* [18] Default:0x0 RW */
+ u32 in_oft:7; /* [25:19] Default:0x0 RW */
+ u32 phid:2; /* [27:26] Default:0x0 RW */
+ u32 flag:1; /* [28] Default:0x0 RW */
+ u32 mode:1; /* [29] Default:0x0 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_60_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_61_ADDR (0x75c394)
+#define NBL_DPED_TNL_L4_CK_CMD_61_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_61_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_61_DWLEN (1)
+union dped_tnl_l4_ck_cmd_61_u {
+ struct dped_tnl_l4_ck_cmd_61 {
+ u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+ u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+ u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+ u32 ck_vld0:1; /* [15] Default:0x1 RW */
+ u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+ u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+ u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+ u32 ck_vld1:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TNL_L4_CK_CMD_61_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_00_ADDR (0x75c3a0)
+#define NBL_DPED_MIR_CMD_00_DEPTH (1)
+#define NBL_DPED_MIR_CMD_00_WIDTH (32)
+#define NBL_DPED_MIR_CMD_00_DWLEN (1)
+union dped_mir_cmd_00_u {
+ struct dped_mir_cmd_00 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_01_ADDR (0x75c3a4)
+#define NBL_DPED_MIR_CMD_01_DEPTH (1)
+#define NBL_DPED_MIR_CMD_01_WIDTH (32)
+#define NBL_DPED_MIR_CMD_01_DWLEN (1)
+union dped_mir_cmd_01_u {
+ struct dped_mir_cmd_01 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_10_ADDR (0x75c3a8)
+#define NBL_DPED_MIR_CMD_10_DEPTH (1)
+#define NBL_DPED_MIR_CMD_10_WIDTH (32)
+#define NBL_DPED_MIR_CMD_10_DWLEN (1)
+union dped_mir_cmd_10_u {
+ struct dped_mir_cmd_10 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_11_ADDR (0x75c3ac)
+#define NBL_DPED_MIR_CMD_11_DEPTH (1)
+#define NBL_DPED_MIR_CMD_11_WIDTH (32)
+#define NBL_DPED_MIR_CMD_11_DWLEN (1)
+union dped_mir_cmd_11_u {
+ struct dped_mir_cmd_11 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_20_ADDR (0x75c3b0)
+#define NBL_DPED_MIR_CMD_20_DEPTH (1)
+#define NBL_DPED_MIR_CMD_20_WIDTH (32)
+#define NBL_DPED_MIR_CMD_20_DWLEN (1)
+union dped_mir_cmd_20_u {
+ struct dped_mir_cmd_20 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_20_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_21_ADDR (0x75c3b4)
+#define NBL_DPED_MIR_CMD_21_DEPTH (1)
+#define NBL_DPED_MIR_CMD_21_WIDTH (32)
+#define NBL_DPED_MIR_CMD_21_DWLEN (1)
+union dped_mir_cmd_21_u {
+ struct dped_mir_cmd_21 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_21_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_30_ADDR (0x75c3b8)
+#define NBL_DPED_MIR_CMD_30_DEPTH (1)
+#define NBL_DPED_MIR_CMD_30_WIDTH (32)
+#define NBL_DPED_MIR_CMD_30_DWLEN (1)
+union dped_mir_cmd_30_u {
+ struct dped_mir_cmd_30 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_30_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_31_ADDR (0x75c3bc)
+#define NBL_DPED_MIR_CMD_31_DEPTH (1)
+#define NBL_DPED_MIR_CMD_31_WIDTH (32)
+#define NBL_DPED_MIR_CMD_31_DWLEN (1)
+union dped_mir_cmd_31_u {
+ struct dped_mir_cmd_31 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_31_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_40_ADDR (0x75c3c0)
+#define NBL_DPED_MIR_CMD_40_DEPTH (1)
+#define NBL_DPED_MIR_CMD_40_WIDTH (32)
+#define NBL_DPED_MIR_CMD_40_DWLEN (1)
+union dped_mir_cmd_40_u {
+ struct dped_mir_cmd_40 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_40_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_41_ADDR (0x75c3c4)
+#define NBL_DPED_MIR_CMD_41_DEPTH (1)
+#define NBL_DPED_MIR_CMD_41_WIDTH (32)
+#define NBL_DPED_MIR_CMD_41_DWLEN (1)
+union dped_mir_cmd_41_u {
+ struct dped_mir_cmd_41 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_41_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_50_ADDR (0x75c3c8)
+#define NBL_DPED_MIR_CMD_50_DEPTH (1)
+#define NBL_DPED_MIR_CMD_50_WIDTH (32)
+#define NBL_DPED_MIR_CMD_50_DWLEN (1)
+union dped_mir_cmd_50_u {
+ struct dped_mir_cmd_50 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_50_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_51_ADDR (0x75c3cc)
+#define NBL_DPED_MIR_CMD_51_DEPTH (1)
+#define NBL_DPED_MIR_CMD_51_WIDTH (32)
+#define NBL_DPED_MIR_CMD_51_DWLEN (1)
+union dped_mir_cmd_51_u {
+ struct dped_mir_cmd_51 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_51_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_60_ADDR (0x75c3d0)
+#define NBL_DPED_MIR_CMD_60_DEPTH (1)
+#define NBL_DPED_MIR_CMD_60_WIDTH (32)
+#define NBL_DPED_MIR_CMD_60_DWLEN (1)
+union dped_mir_cmd_60_u {
+ struct dped_mir_cmd_60 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_60_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_61_ADDR (0x75c3d4)
+#define NBL_DPED_MIR_CMD_61_DEPTH (1)
+#define NBL_DPED_MIR_CMD_61_WIDTH (32)
+#define NBL_DPED_MIR_CMD_61_DWLEN (1)
+union dped_mir_cmd_61_u {
+ struct dped_mir_cmd_61 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_61_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_70_ADDR (0x75c3d8)
+#define NBL_DPED_MIR_CMD_70_DEPTH (1)
+#define NBL_DPED_MIR_CMD_70_WIDTH (32)
+#define NBL_DPED_MIR_CMD_70_DWLEN (1)
+union dped_mir_cmd_70_u {
+ struct dped_mir_cmd_70 {
+ u32 len:7; /* [6:0] Default:0x0 RW */
+ u32 rsv2:1; /* [7] Default:0x0 RO */
+ u32 oft:7; /* [14:8] Default:0x0 RW */
+ u32 rsv1:1; /* [15] Default:0x0 RO */
+ u32 mode:1; /* [16] Default:0x0 RW */
+ u32 en:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_70_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_71_ADDR (0x75c3dc)
+#define NBL_DPED_MIR_CMD_71_DEPTH (1)
+#define NBL_DPED_MIR_CMD_71_WIDTH (32)
+#define NBL_DPED_MIR_CMD_71_DWLEN (1)
+union dped_mir_cmd_71_u {
+ struct dped_mir_cmd_71 {
+ u32 vau:16; /* [15:0] Default:0x0 RW */
+ u32 type_sel:2; /* [17:16] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIR_CMD_71_DWLEN];
+} __packed;
+
+#define NBL_DPED_DSCP_CK_EN_ADDR (0x75c3e8)
+#define NBL_DPED_DSCP_CK_EN_DEPTH (1)
+#define NBL_DPED_DSCP_CK_EN_WIDTH (32)
+#define NBL_DPED_DSCP_CK_EN_DWLEN (1)
+union dped_dscp_ck_en_u {
+ struct dped_dscp_ck_en {
+ u32 l4_en:1; /* [0] Default:0x0 RW */
+ u32 l3_en:1; /* [1] Default:0x1 RW */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_DSCP_CK_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_RDMA_ECN_REMARK_ADDR (0x75c3f0)
+#define NBL_DPED_RDMA_ECN_REMARK_DEPTH (1)
+#define NBL_DPED_RDMA_ECN_REMARK_WIDTH (32)
+#define NBL_DPED_RDMA_ECN_REMARK_DWLEN (1)
+union dped_rdma_ecn_remark_u {
+ struct dped_rdma_ecn_remark {
+ u32 vau:2; /* [1:0] Default:0x1 RW */
+ u32 rsv1:2; /* [3:2] Default:0x0 RO */
+ u32 en:1; /* [4] Default:0x0 RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_RDMA_ECN_REMARK_DWLEN];
+} __packed;
+
+#define NBL_DPED_VLAN_OFFSET_ADDR (0x75c3f4)
+#define NBL_DPED_VLAN_OFFSET_DEPTH (1)
+#define NBL_DPED_VLAN_OFFSET_WIDTH (32)
+#define NBL_DPED_VLAN_OFFSET_DWLEN (1)
+union dped_vlan_offset_u {
+ struct dped_vlan_offset {
+ u32 oft:8; /* [7:0] Default:0xc RW */
+ u32 rsv:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_VLAN_OFFSET_DWLEN];
+} __packed;
+
+#define NBL_DPED_DSCP_OFFSET_0_ADDR (0x75c3f8)
+#define NBL_DPED_DSCP_OFFSET_0_DEPTH (1)
+#define NBL_DPED_DSCP_OFFSET_0_WIDTH (32)
+#define NBL_DPED_DSCP_OFFSET_0_DWLEN (1)
+union dped_dscp_offset_0_u {
+ struct dped_dscp_offset_0 {
+ u32 oft:8; /* [7:0] Default:0x8 RW */
+ u32 rsv:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_DSCP_OFFSET_0_DWLEN];
+} __packed;
+
+#define NBL_DPED_DSCP_OFFSET_1_ADDR (0x75c3fc)
+#define NBL_DPED_DSCP_OFFSET_1_DEPTH (1)
+#define NBL_DPED_DSCP_OFFSET_1_WIDTH (32)
+#define NBL_DPED_DSCP_OFFSET_1_DWLEN (1)
+union dped_dscp_offset_1_u {
+ struct dped_dscp_offset_1 {
+ u32 oft:8; /* [7:0] Default:0x4 RW */
+ u32 rsv:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_DSCP_OFFSET_1_DWLEN];
+} __packed;
+
+#define NBL_DPED_CFG_TEST_ADDR (0x75c600)
+#define NBL_DPED_CFG_TEST_DEPTH (1)
+#define NBL_DPED_CFG_TEST_WIDTH (32)
+#define NBL_DPED_CFG_TEST_DWLEN (1)
+union dped_cfg_test_u {
+ struct dped_cfg_test {
+ u32 test:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_CFG_TEST_DWLEN];
+} __packed;
+
+#define NBL_DPED_BP_STATE_ADDR (0x75c608)
+#define NBL_DPED_BP_STATE_DEPTH (1)
+#define NBL_DPED_BP_STATE_WIDTH (32)
+#define NBL_DPED_BP_STATE_DWLEN (1)
+union dped_bp_state_u {
+ struct dped_bp_state {
+ u32 bm_rtn_tout:1; /* [0] Default:0x0 RO */
+ u32 bm_not_rdy:1; /* [1] Default:0x0 RO */
+ u32 dprbac_fc:1; /* [2] Default:0x0 RO */
+ u32 qm_fc:1; /* [3] Default:0x0 RO */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_BP_STATE_DWLEN];
+} __packed;
+
+#define NBL_DPED_BP_HISTORY_ADDR (0x75c60c)
+#define NBL_DPED_BP_HISTORY_DEPTH (1)
+#define NBL_DPED_BP_HISTORY_WIDTH (32)
+#define NBL_DPED_BP_HISTORY_DWLEN (1)
+union dped_bp_history_u {
+ struct dped_bp_history {
+ u32 bm_rtn_tout:1; /* [0] Default:0x0 RC */
+ u32 bm_not_rdy:1; /* [1] Default:0x0 RC */
+ u32 dprbac_fc:1; /* [2] Default:0x0 RC */
+ u32 qm_fc:1; /* [3] Default:0x0 RC */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_BP_HISTORY_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIRID_IND_ADDR (0x75c900)
+#define NBL_DPED_MIRID_IND_DEPTH (1)
+#define NBL_DPED_MIRID_IND_WIDTH (32)
+#define NBL_DPED_MIRID_IND_DWLEN (1)
+union dped_mirid_ind_u {
+ struct dped_mirid_ind {
+ u32 nomat:1; /* [0] Default:0x0 RC */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MIRID_IND_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_AUX_OFT_ADDR (0x75c904)
+#define NBL_DPED_MD_AUX_OFT_DEPTH (1)
+#define NBL_DPED_MD_AUX_OFT_WIDTH (32)
+#define NBL_DPED_MD_AUX_OFT_DWLEN (1)
+union dped_md_aux_oft_u {
+ struct dped_md_aux_oft {
+ u32 l2_oft:8; /* [7:0] Default:0x0 RO */
+ u32 l3_oft:8; /* [15:8] Default:0x0 RO */
+ u32 l4_oft:8; /* [23:16] Default:0x0 RO */
+ u32 pld_oft:8; /* [31:24] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_AUX_OFT_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_AUX_PKT_LEN_ADDR (0x75c908)
+#define NBL_DPED_MD_AUX_PKT_LEN_DEPTH (1)
+#define NBL_DPED_MD_AUX_PKT_LEN_WIDTH (32)
+#define NBL_DPED_MD_AUX_PKT_LEN_DWLEN (1)
+union dped_md_aux_pkt_len_u {
+ struct dped_md_aux_pkt_len {
+ u32 len:14; /* [13:0] Default:0x0 RO */
+ u32 rsv:18; /* [31:14] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_AUX_PKT_LEN_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_FWD_MIR_ADDR (0x75c90c)
+#define NBL_DPED_MD_FWD_MIR_DEPTH (1)
+#define NBL_DPED_MD_FWD_MIR_WIDTH (32)
+#define NBL_DPED_MD_FWD_MIR_DWLEN (1)
+union dped_md_fwd_mir_u {
+ struct dped_md_fwd_mir {
+ u32 id:4; /* [3:0] Default:0x0 RO */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_FWD_MIR_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_FWD_DPORT_ADDR (0x75c910)
+#define NBL_DPED_MD_FWD_DPORT_DEPTH (1)
+#define NBL_DPED_MD_FWD_DPORT_WIDTH (32)
+#define NBL_DPED_MD_FWD_DPORT_DWLEN (1)
+union dped_md_fwd_dport_u {
+ struct dped_md_fwd_dport {
+ u32 id:16; /* [15:0] Default:0x0 RO */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_FWD_DPORT_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_AUX_PLD_CKSUM_ADDR (0x75c914)
+#define NBL_DPED_MD_AUX_PLD_CKSUM_DEPTH (1)
+#define NBL_DPED_MD_AUX_PLD_CKSUM_WIDTH (32)
+#define NBL_DPED_MD_AUX_PLD_CKSUM_DWLEN (1)
+union dped_md_aux_pld_cksum_u {
+ struct dped_md_aux_pld_cksum {
+ u32 ck:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_AUX_PLD_CKSUM_DWLEN];
+} __packed;
+
+#define NBL_DPED_INNER_PKT_CKSUM_ADDR (0x75c918)
+#define NBL_DPED_INNER_PKT_CKSUM_DEPTH (1)
+#define NBL_DPED_INNER_PKT_CKSUM_WIDTH (32)
+#define NBL_DPED_INNER_PKT_CKSUM_DWLEN (1)
+union dped_inner_pkt_cksum_u {
+ struct dped_inner_pkt_cksum {
+ u32 ck:16; /* [15:0] Default:0x0 RO */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_INNER_PKT_CKSUM_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_0_ADDR (0x75c920)
+#define NBL_DPED_MD_EDIT_0_DEPTH (1)
+#define NBL_DPED_MD_EDIT_0_WIDTH (32)
+#define NBL_DPED_MD_EDIT_0_DWLEN (1)
+union dped_md_edit_0_u {
+ struct dped_md_edit_0 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_0_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_1_ADDR (0x75c924)
+#define NBL_DPED_MD_EDIT_1_DEPTH (1)
+#define NBL_DPED_MD_EDIT_1_WIDTH (32)
+#define NBL_DPED_MD_EDIT_1_DWLEN (1)
+union dped_md_edit_1_u {
+ struct dped_md_edit_1 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_1_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_2_ADDR (0x75c928)
+#define NBL_DPED_MD_EDIT_2_DEPTH (1)
+#define NBL_DPED_MD_EDIT_2_WIDTH (32)
+#define NBL_DPED_MD_EDIT_2_DWLEN (1)
+union dped_md_edit_2_u {
+ struct dped_md_edit_2 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_2_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_3_ADDR (0x75c92c)
+#define NBL_DPED_MD_EDIT_3_DEPTH (1)
+#define NBL_DPED_MD_EDIT_3_WIDTH (32)
+#define NBL_DPED_MD_EDIT_3_DWLEN (1)
+union dped_md_edit_3_u {
+ struct dped_md_edit_3 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_3_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_4_ADDR (0x75c930)
+#define NBL_DPED_MD_EDIT_4_DEPTH (1)
+#define NBL_DPED_MD_EDIT_4_WIDTH (32)
+#define NBL_DPED_MD_EDIT_4_DWLEN (1)
+union dped_md_edit_4_u {
+ struct dped_md_edit_4 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_4_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_5_ADDR (0x75c934)
+#define NBL_DPED_MD_EDIT_5_DEPTH (1)
+#define NBL_DPED_MD_EDIT_5_WIDTH (32)
+#define NBL_DPED_MD_EDIT_5_DWLEN (1)
+union dped_md_edit_5_u {
+ struct dped_md_edit_5 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_5_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_6_ADDR (0x75c938)
+#define NBL_DPED_MD_EDIT_6_DEPTH (1)
+#define NBL_DPED_MD_EDIT_6_WIDTH (32)
+#define NBL_DPED_MD_EDIT_6_DWLEN (1)
+union dped_md_edit_6_u {
+ struct dped_md_edit_6 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_6_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_7_ADDR (0x75c93c)
+#define NBL_DPED_MD_EDIT_7_DEPTH (1)
+#define NBL_DPED_MD_EDIT_7_WIDTH (32)
+#define NBL_DPED_MD_EDIT_7_DWLEN (1)
+union dped_md_edit_7_u {
+ struct dped_md_edit_7 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_7_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_8_ADDR (0x75c940)
+#define NBL_DPED_MD_EDIT_8_DEPTH (1)
+#define NBL_DPED_MD_EDIT_8_WIDTH (32)
+#define NBL_DPED_MD_EDIT_8_DWLEN (1)
+union dped_md_edit_8_u {
+ struct dped_md_edit_8 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_8_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_9_ADDR (0x75c944)
+#define NBL_DPED_MD_EDIT_9_DEPTH (1)
+#define NBL_DPED_MD_EDIT_9_WIDTH (32)
+#define NBL_DPED_MD_EDIT_9_DWLEN (1)
+union dped_md_edit_9_u {
+ struct dped_md_edit_9 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_9_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_10_ADDR (0x75c948)
+#define NBL_DPED_MD_EDIT_10_DEPTH (1)
+#define NBL_DPED_MD_EDIT_10_WIDTH (32)
+#define NBL_DPED_MD_EDIT_10_DWLEN (1)
+union dped_md_edit_10_u {
+ struct dped_md_edit_10 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_11_ADDR (0x75c94c)
+#define NBL_DPED_MD_EDIT_11_DEPTH (1)
+#define NBL_DPED_MD_EDIT_11_WIDTH (32)
+#define NBL_DPED_MD_EDIT_11_DWLEN (1)
+union dped_md_edit_11_u {
+ struct dped_md_edit_11 {
+ u32 vau:16; /* [15:0] Default:0x0 RO */
+ u32 id:6; /* [21:16] Default:0x0 RO */
+ u32 rsv:10; /* [31:22] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_MD_EDIT_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_ADD_DEL_LEN_ADDR (0x75c950)
+#define NBL_DPED_ADD_DEL_LEN_DEPTH (1)
+#define NBL_DPED_ADD_DEL_LEN_WIDTH (32)
+#define NBL_DPED_ADD_DEL_LEN_DWLEN (1)
+union dped_add_del_len_u {
+ struct dped_add_del_len {
+ u32 len:9; /* [8:0] Default:0x0 RO */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_ADD_DEL_LEN_DWLEN];
+} __packed;
+
+#define NBL_DPED_TTL_INFO_ADDR (0x75c970)
+#define NBL_DPED_TTL_INFO_DEPTH (1)
+#define NBL_DPED_TTL_INFO_WIDTH (32)
+#define NBL_DPED_TTL_INFO_DWLEN (1)
+union dped_ttl_info_u {
+ struct dped_ttl_info {
+ u32 old_ttl:8; /* [7:0] Default:0x0 RO */
+ u32 new_ttl:8; /* [15:8] Default:0x0 RO */
+ u32 ttl_val:1; /* [16] Default:0x0 RC */
+ u32 rsv:15; /* [31:17] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TTL_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_LEN_INFO_VLD_ADDR (0x75c974)
+#define NBL_DPED_LEN_INFO_VLD_DEPTH (1)
+#define NBL_DPED_LEN_INFO_VLD_WIDTH (32)
+#define NBL_DPED_LEN_INFO_VLD_DWLEN (1)
+union dped_len_info_vld_u {
+ struct dped_len_info_vld {
+ u32 length0:1; /* [0] Default:0x0 RC */
+ u32 length1:1; /* [1] Default:0x0 RC */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_LEN_INFO_VLD_DWLEN];
+} __packed;
+
+#define NBL_DPED_LEN0_INFO_ADDR (0x75c978)
+#define NBL_DPED_LEN0_INFO_DEPTH (1)
+#define NBL_DPED_LEN0_INFO_WIDTH (32)
+#define NBL_DPED_LEN0_INFO_DWLEN (1)
+union dped_len0_info_u {
+ struct dped_len0_info {
+ u32 old_len:16; /* [15:0] Default:0x0 RO */
+ u32 new_len:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_LEN0_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_LEN1_INFO_ADDR (0x75c97c)
+#define NBL_DPED_LEN1_INFO_DEPTH (1)
+#define NBL_DPED_LEN1_INFO_WIDTH (32)
+#define NBL_DPED_LEN1_INFO_DWLEN (1)
+union dped_len1_info_u {
+ struct dped_len1_info {
+ u32 old_len:16; /* [15:0] Default:0x0 RO */
+ u32 new_len:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_LEN1_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_EDIT_ATNUM_INFO_ADDR (0x75c980)
+#define NBL_DPED_EDIT_ATNUM_INFO_DEPTH (1)
+#define NBL_DPED_EDIT_ATNUM_INFO_WIDTH (32)
+#define NBL_DPED_EDIT_ATNUM_INFO_DWLEN (1)
+union dped_edit_atnum_info_u {
+ struct dped_edit_atnum_info {
+ u32 replace:4; /* [3:0] Default:0x0 RO */
+ u32 del:4; /* [7:4] Default:0x0 RO */
+ u32 add:4; /* [11:8] Default:0x0 RO */
+ u32 ttl:4; /* [15:12] Default:0x0 RO */
+ u32 dscp:4; /* [19:16] Default:0x0 RO */
+ u32 tnl:4; /* [23:20] Default:0x0 RO */
+ u32 sport:4; /* [27:24] Default:0x0 RO */
+ u32 rsv:4; /* [31:28] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_EDIT_ATNUM_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_EDIT_NO_AT_INFO_ADDR (0x75c984)
+#define NBL_DPED_EDIT_NO_AT_INFO_DEPTH (1)
+#define NBL_DPED_EDIT_NO_AT_INFO_WIDTH (32)
+#define NBL_DPED_EDIT_NO_AT_INFO_DWLEN (1)
+union dped_edit_no_at_info_u {
+ struct dped_edit_no_at_info {
+ u32 l3_len:1; /* [0] Default:0x0 RC */
+ u32 l4_len:1; /* [1] Default:0x0 RC */
+ u32 l3_ck:1; /* [2] Default:0x0 RC */
+ u32 l4_ck:1; /* [3] Default:0x0 RC */
+ u32 sctp_ck:1; /* [4] Default:0x0 RC */
+ u32 padding:1; /* [5] Default:0x0 RC */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_EDIT_NO_AT_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDT_PROF_ADDR (0x75d000)
+#define NBL_DPED_HW_EDT_PROF_DEPTH (32)
+#define NBL_DPED_HW_EDT_PROF_WIDTH (32)
+#define NBL_DPED_HW_EDT_PROF_DWLEN (1)
+union dped_hw_edt_prof_u {
+ struct dped_hw_edt_prof {
+ u32 l4_len:2; /* [1:0] Default:0x2 RW */
+ u32 l3_len:2; /* [3:2] Default:0x2 RW */
+ u32 l4_ck:3; /* [6:4] Default:0x7 RW */
+ u32 l3_ck:1; /* [7:7] Default:0x0 RW */
+ u32 l4_ck_zero_free:1; /* [8:8] Default:0x1 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_HW_EDT_PROF_DWLEN];
+} __packed;
+#define NBL_DPED_HW_EDT_PROF_REG(r) (NBL_DPED_HW_EDT_PROF_ADDR + \
+ (NBL_DPED_HW_EDT_PROF_DWLEN * 4) * (r))
+
+#define NBL_DPED_OUT_MASK_ADDR (0x75e000)
+#define NBL_DPED_OUT_MASK_DEPTH (24)
+#define NBL_DPED_OUT_MASK_WIDTH (64)
+#define NBL_DPED_OUT_MASK_DWLEN (2)
+union dped_out_mask_u {
+ struct dped_out_mask {
+ u32 flag:32; /* [31:0] Default:0x0 RW */
+ u32 fwd:30; /* [61:32] Default:0x0 RW */
+ u32 rsv:2; /* [63:62] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_OUT_MASK_DWLEN];
+} __packed;
+#define NBL_DPED_OUT_MASK_REG(r) (NBL_DPED_OUT_MASK_ADDR + \
+ (NBL_DPED_OUT_MASK_DWLEN * 4) * (r))
+
+#define NBL_DPED_TAB_EDIT_CMD_ADDR (0x75f000)
+#define NBL_DPED_TAB_EDIT_CMD_DEPTH (32)
+#define NBL_DPED_TAB_EDIT_CMD_WIDTH (32)
+#define NBL_DPED_TAB_EDIT_CMD_DWLEN (1)
+union dped_tab_edit_cmd_u {
+ struct dped_tab_edit_cmd {
+ u32 in_offset:8; /* [7:0] Default:0x0 RW */
+ u32 phid:2; /* [9:8] Default:0x0 RW */
+ u32 len:7; /* [16:10] Default:0x0 RW */
+ u32 mode:4; /* [20:17] Default:0xf RW */
+ u32 l4_ck_ofld_upt:1; /* [21] Default:0x1 RW */
+ u32 l3_ck_ofld_upt:1; /* [22] Default:0x1 RW */
+ u32 rsv:9; /* [31:23] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TAB_EDIT_CMD_DWLEN];
+} __packed;
+#define NBL_DPED_TAB_EDIT_CMD_REG(r) (NBL_DPED_TAB_EDIT_CMD_ADDR + \
+ (NBL_DPED_TAB_EDIT_CMD_DWLEN * 4) * (r))
+
+#define NBL_DPED_TAB_MIR_ADDR (0x760000)
+#define NBL_DPED_TAB_MIR_DEPTH (8)
+#define NBL_DPED_TAB_MIR_WIDTH (1024)
+#define NBL_DPED_TAB_MIR_DWLEN (32)
+union dped_tab_mir_u {
+ struct dped_tab_mir {
+ u32 cfg_mir_data:16; /* [719:0] Default:0x0 RW */
+ u32 cfg_mir_data_arr[22]; /* [719:0] Default:0x0 RW */
+ u32 cfg_mir_info_l:32; /* [755:720] Default:0x0 RW */
+ u32 cfg_mir_info_h:4; /* [755:720] Default:0x0 RW */
+ u32 rsv:12; /* [1023:756] Default:0x0 RO */
+ u32 rsv_arr[8]; /* [1023:756] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TAB_MIR_DWLEN];
+} __packed;
+#define NBL_DPED_TAB_MIR_REG(r) (NBL_DPED_TAB_MIR_ADDR + \
+ (NBL_DPED_TAB_MIR_DWLEN * 4) * (r))
+
+#define NBL_DPED_TAB_VSI_TYPE_ADDR (0x761000)
+#define NBL_DPED_TAB_VSI_TYPE_DEPTH (1031)
+#define NBL_DPED_TAB_VSI_TYPE_WIDTH (32)
+#define NBL_DPED_TAB_VSI_TYPE_DWLEN (1)
+union dped_tab_vsi_type_u {
+ struct dped_tab_vsi_type {
+ u32 sel:4; /* [3:0] Default:0x0 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TAB_VSI_TYPE_DWLEN];
+} __packed;
+#define NBL_DPED_TAB_VSI_TYPE_REG(r) (NBL_DPED_TAB_VSI_TYPE_ADDR + \
+ (NBL_DPED_TAB_VSI_TYPE_DWLEN * 4) * (r))
+
+#define NBL_DPED_TAB_REPLACE_ADDR (0x763000)
+#define NBL_DPED_TAB_REPLACE_DEPTH (2048)
+#define NBL_DPED_TAB_REPLACE_WIDTH (64)
+#define NBL_DPED_TAB_REPLACE_DWLEN (2)
+union dped_tab_replace_u {
+ struct dped_tab_replace {
+ u32 vau_arr[2]; /* [63:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DPED_TAB_REPLACE_DWLEN];
+} __packed;
+#define NBL_DPED_TAB_REPLACE_REG(r) (NBL_DPED_TAB_REPLACE_ADDR + \
+ (NBL_DPED_TAB_REPLACE_DWLEN * 4) * (r))
+
+#define NBL_DPED_TAB_TNL_ADDR (0x7dc000)
+#define NBL_DPED_TAB_TNL_DEPTH (4096)
+#define NBL_DPED_TAB_TNL_WIDTH (1024)
+#define NBL_DPED_TAB_TNL_DWLEN (32)
+union dped_tab_tnl_u {
+ struct dped_tab_tnl {
+ u32 cfg_tnl_data:16; /* [719:0] Default:0x0 RW */
+ u32 cfg_tnl_data_arr[22]; /* [719:0] Default:0x0 RW */
+ u32 cfg_tnl_info:8; /* [791:720] Default:0x0 RW */
+ u32 cfg_tnl_info_arr[2]; /* [791:720] Default:0x0 RW */
+ u32 rsv_l:32; /* [1023:792] Default:0x0 RO */
+ u32 rsv_h:8; /* [1023:792] Default:0x0 RO */
+ u32 rsv_arr[6]; /* [1023:792] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DPED_TAB_TNL_DWLEN];
+} __packed;
+#define NBL_DPED_TAB_TNL_REG(r) (NBL_DPED_TAB_TNL_ADDR + \
+ (NBL_DPED_TAB_TNL_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h
new file mode 100644
index 000000000000..554ef4592189
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h
@@ -0,0 +1,929 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_DSTORE_H
+#define NBL_DSTORE_H 1
+
+#include <linux/types.h>
+
+#define NBL_DSTORE_BASE (0x00704000)
+
+#define NBL_DSTORE_INT_STATUS_ADDR (0x704000)
+#define NBL_DSTORE_INT_STATUS_DEPTH (1)
+#define NBL_DSTORE_INT_STATUS_WIDTH (32)
+#define NBL_DSTORE_INT_STATUS_DWLEN (1)
+union dstore_int_status_u {
+ struct dstore_int_status {
+ u32 ucor_err:1; /* [0] Default:0x0 RWC */
+ u32 cor_err:1; /* [1] Default:0x0 RWC */
+ u32 fifo_uflw_err:1; /* [2] Default:0x0 RWC */
+ u32 fifo_dflw_err:1; /* [3] Default:0x0 RWC */
+ u32 cif_err:1; /* [4] Default:0x0 RWC */
+ u32 parity_err:1; /* [5] Default:0x0 RWC */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_INT_MASK_ADDR (0x704004)
+#define NBL_DSTORE_INT_MASK_DEPTH (1)
+#define NBL_DSTORE_INT_MASK_WIDTH (32)
+#define NBL_DSTORE_INT_MASK_DWLEN (1)
+union dstore_int_mask_u {
+ struct dstore_int_mask {
+ u32 ucor_err:1; /* [0] Default:0x0 RW */
+ u32 cor_err:1; /* [1] Default:0x0 RW */
+ u32 fifo_uflw_err:1; /* [2] Default:0x0 RW */
+ u32 fifo_dflw_err:1; /* [3] Default:0x0 RW */
+ u32 cif_err:1; /* [4] Default:0x0 RW */
+ u32 parity_err:1; /* [5] Default:0x0 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_INT_SET_ADDR (0x704008)
+#define NBL_DSTORE_INT_SET_DEPTH (0)
+#define NBL_DSTORE_INT_SET_WIDTH (32)
+#define NBL_DSTORE_INT_SET_DWLEN (1)
+union dstore_int_set_u {
+ struct dstore_int_set {
+ u32 ucor_err:1; /* [0] Default:0x0 WO */
+ u32 cor_err:1; /* [1] Default:0x0 WO */
+ u32 fifo_uflw_err:1; /* [2] Default:0x0 WO */
+ u32 fifo_dflw_err:1; /* [3] Default:0x0 WO */
+ u32 cif_err:1; /* [4] Default:0x0 WO */
+ u32 parity_err:1; /* [5] Default:0x0 WO */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_COR_ERR_INFO_ADDR (0x70400c)
+#define NBL_DSTORE_COR_ERR_INFO_DEPTH (1)
+#define NBL_DSTORE_COR_ERR_INFO_WIDTH (32)
+#define NBL_DSTORE_COR_ERR_INFO_DWLEN (1)
+union dstore_cor_err_info_u {
+ struct dstore_cor_err_info {
+ u32 ram_addr:10; /* [9:0] Default:0x0 RO */
+ u32 rsv1:6; /* [15:10] Default:0x0 RO */
+ u32 ram_id:4; /* [19:16] Default:0x0 RO */
+ u32 rsv:12; /* [31:20] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_COR_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PARITY_ERR_INFO_ADDR (0x704014)
+#define NBL_DSTORE_PARITY_ERR_INFO_DEPTH (1)
+#define NBL_DSTORE_PARITY_ERR_INFO_WIDTH (32)
+#define NBL_DSTORE_PARITY_ERR_INFO_DWLEN (1)
+union dstore_parity_err_info_u {
+ struct dstore_parity_err_info {
+ u32 ram_addr:10; /* [9:0] Default:0x0 RO */
+ u32 rsv1:6; /* [15:10] Default:0x0 RO */
+ u32 ram_id:4; /* [19:16] Default:0x0 RO */
+ u32 rsv:12; /* [31:20] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_PARITY_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_CIF_ERR_INFO_ADDR (0x70401c)
+#define NBL_DSTORE_CIF_ERR_INFO_DEPTH (1)
+#define NBL_DSTORE_CIF_ERR_INFO_WIDTH (32)
+#define NBL_DSTORE_CIF_ERR_INFO_DWLEN (1)
+union dstore_cif_err_info_u {
+ struct dstore_cif_err_info {
+ u32 addr:30; /* [29:0] Default:0x0 RO */
+ u32 wr_err:1; /* [30] Default:0x0 RO */
+ u32 ucor_err:1; /* [31] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_CAR_CTRL_ADDR (0x704100)
+#define NBL_DSTORE_CAR_CTRL_DEPTH (1)
+#define NBL_DSTORE_CAR_CTRL_WIDTH (32)
+#define NBL_DSTORE_CAR_CTRL_DWLEN (1)
+union dstore_car_ctrl_u {
+ struct dstore_car_ctrl {
+ u32 sctr_car:1; /* [0] Default:0x1 RW */
+ u32 rctr_car:1; /* [1] Default:0x1 RW */
+ u32 rc_car:1; /* [2] Default:0x1 RW */
+ u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_INIT_START_ADDR (0x704104)
+#define NBL_DSTORE_INIT_START_DEPTH (1)
+#define NBL_DSTORE_INIT_START_WIDTH (32)
+#define NBL_DSTORE_INIT_START_DWLEN (1)
+union dstore_init_start_u {
+ struct dstore_init_start {
+ u32 init_start:1; /* [0] Default:0x0 WO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PKT_LEN_ADDR (0x704108)
+#define NBL_DSTORE_PKT_LEN_DEPTH (1)
+#define NBL_DSTORE_PKT_LEN_WIDTH (32)
+#define NBL_DSTORE_PKT_LEN_DWLEN (1)
+union dstore_pkt_len_u {
+ struct dstore_pkt_len {
+ u32 min:7; /* [6:0] Default:60 RW */
+ u32 rsv1:8; /* [14:7] Default:0x0 RO */
+ u32 min_chk_en:1; /* [15] Default:0x0 RW */
+ u32 max:14; /* [29:16] Default:9600 RW */
+ u32 rsv:1; /* [30] Default:0x0 RO */
+ u32 max_chk_en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_PKT_LEN_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_SCH_PD_BUFFER_TH_ADDR (0x704128)
+#define NBL_DSTORE_SCH_PD_BUFFER_TH_DEPTH (1)
+#define NBL_DSTORE_SCH_PD_BUFFER_TH_WIDTH (32)
+#define NBL_DSTORE_SCH_PD_BUFFER_TH_DWLEN (1)
+union dstore_sch_pd_buffer_th_u {
+ struct dstore_sch_pd_buffer_th {
+ u32 aful_th:9; /* [8:0] Default:500 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_SCH_PD_BUFFER_TH_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_GLB_FC_TH_ADDR (0x70412c)
+#define NBL_DSTORE_GLB_FC_TH_DEPTH (1)
+#define NBL_DSTORE_GLB_FC_TH_WIDTH (32)
+#define NBL_DSTORE_GLB_FC_TH_DWLEN (1)
+union dstore_glb_fc_th_u {
+ struct dstore_glb_fc_th {
+ u32 xoff_th:10; /* [9:0] Default:900 RW */
+ u32 rsv1:6; /* [15:10] Default:0x0 RO */
+ u32 xon_th:10; /* [25:16] Default:850 RW */
+ u32 rsv:5; /* [30:26] Default:0x0 RO */
+ u32 fc_en:1; /* [31:31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_GLB_FC_TH_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_GLB_DROP_TH_ADDR (0x704130)
+#define NBL_DSTORE_GLB_DROP_TH_DEPTH (1)
+#define NBL_DSTORE_GLB_DROP_TH_WIDTH (32)
+#define NBL_DSTORE_GLB_DROP_TH_DWLEN (1)
+union dstore_glb_drop_th_u {
+ struct dstore_glb_drop_th {
+ u32 disc_th:10; /* [9:0] Default:985 RW */
+ u32 rsv:21; /* [30:10] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_GLB_DROP_TH_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PORT_FC_TH_ADDR (0x704134)
+#define NBL_DSTORE_PORT_FC_TH_DEPTH (6)
+#define NBL_DSTORE_PORT_FC_TH_WIDTH (32)
+#define NBL_DSTORE_PORT_FC_TH_DWLEN (1)
+union dstore_port_fc_th_u {
+ struct dstore_port_fc_th {
+ u32 xoff_th:10; /* [9:0] Default:400 RW */
+ u32 rsv1:6; /* [15:10] Default:0x0 RO */
+ u32 xon_th:10; /* [25:16] Default:400 RW */
+ u32 rsv:4; /* [29:26] Default:0x0 RO */
+ u32 fc_set:1; /* [30:30] Default:0x0 RW */
+ u32 fc_en:1; /* [31:31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_PORT_FC_TH_DWLEN];
+} __packed;
+#define NBL_DSTORE_PORT_FC_TH_REG(r) (NBL_DSTORE_PORT_FC_TH_ADDR + \
+ (NBL_DSTORE_PORT_FC_TH_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_PORT_DROP_TH_ADDR (0x704150)
+#define NBL_DSTORE_PORT_DROP_TH_DEPTH (6)
+#define NBL_DSTORE_PORT_DROP_TH_WIDTH (32)
+#define NBL_DSTORE_PORT_DROP_TH_DWLEN (1)
+union dstore_port_drop_th_u {
+ struct dstore_port_drop_th {
+ u32 disc_th:10; /* [9:0] Default:800 RW */
+ u32 rsv:21; /* [30:10] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_PORT_DROP_TH_DWLEN];
+} __packed;
+#define NBL_DSTORE_PORT_DROP_TH_REG(r) (NBL_DSTORE_PORT_DROP_TH_ADDR + \
+ (NBL_DSTORE_PORT_DROP_TH_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_CFG_TEST_ADDR (0x704170)
+#define NBL_DSTORE_CFG_TEST_DEPTH (1)
+#define NBL_DSTORE_CFG_TEST_WIDTH (32)
+#define NBL_DSTORE_CFG_TEST_DWLEN (1)
+union dstore_cfg_test_u {
+ struct dstore_cfg_test {
+ u32 test:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_CFG_TEST_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_HIGH_PRI_PKT_ADDR (0x70417c)
+#define NBL_DSTORE_HIGH_PRI_PKT_DEPTH (1)
+#define NBL_DSTORE_HIGH_PRI_PKT_WIDTH (32)
+#define NBL_DSTORE_HIGH_PRI_PKT_DWLEN (1)
+union dstore_high_pri_pkt_u {
+ struct dstore_high_pri_pkt {
+ u32 en:1; /* [0:0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_HIGH_PRI_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_COS_FC_TH_ADDR (0x704200)
+#define NBL_DSTORE_COS_FC_TH_DEPTH (48)
+#define NBL_DSTORE_COS_FC_TH_WIDTH (32)
+#define NBL_DSTORE_COS_FC_TH_DWLEN (1)
+union dstore_cos_fc_th_u {
+ struct dstore_cos_fc_th {
+ u32 xoff_th:10; /* [9:0] Default:100 RW */
+ u32 rsv1:6; /* [15:10] Default:0x0 RO */
+ u32 xon_th:10; /* [25:16] Default:100 RW */
+ u32 rsv:4; /* [29:26] Default:0x0 RO */
+ u32 fc_set:1; /* [30:30] Default:0x0 RW */
+ u32 fc_en:1; /* [31:31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_COS_FC_TH_DWLEN];
+} __packed;
+#define NBL_DSTORE_COS_FC_TH_REG(r) (NBL_DSTORE_COS_FC_TH_ADDR + \
+ (NBL_DSTORE_COS_FC_TH_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_COS_DROP_TH_ADDR (0x704300)
+#define NBL_DSTORE_COS_DROP_TH_DEPTH (48)
+#define NBL_DSTORE_COS_DROP_TH_WIDTH (32)
+#define NBL_DSTORE_COS_DROP_TH_DWLEN (1)
+union dstore_cos_drop_th_u {
+ struct dstore_cos_drop_th {
+ u32 disc_th:10; /* [9:0] Default:120 RW */
+ u32 rsv:21; /* [30:10] Default:0x0 RO */
+ u32 en:1; /* [31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_COS_DROP_TH_DWLEN];
+} __packed;
+#define NBL_DSTORE_COS_DROP_TH_REG(r) (NBL_DSTORE_COS_DROP_TH_ADDR + \
+ (NBL_DSTORE_COS_DROP_TH_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_SCH_PD_WRR_WGT_ADDR (0x704400)
+#define NBL_DSTORE_SCH_PD_WRR_WGT_DEPTH (36)
+#define NBL_DSTORE_SCH_PD_WRR_WGT_WIDTH (32)
+#define NBL_DSTORE_SCH_PD_WRR_WGT_DWLEN (1)
+union dstore_sch_pd_wrr_wgt_u {
+ struct dstore_sch_pd_wrr_wgt {
+ u32 wgt_cos:4; /* [3:0] Default:0x0 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_SCH_PD_WRR_WGT_DWLEN];
+} __packed;
+#define NBL_DSTORE_SCH_PD_WRR_WGT_REG(r) (NBL_DSTORE_SCH_PD_WRR_WGT_ADDR + \
+ (NBL_DSTORE_SCH_PD_WRR_WGT_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_COS7_FORCE_ADDR (0x704504)
+#define NBL_DSTORE_COS7_FORCE_DEPTH (1)
+#define NBL_DSTORE_COS7_FORCE_WIDTH (32)
+#define NBL_DSTORE_COS7_FORCE_DWLEN (1)
+union dstore_cos7_force_u {
+ struct dstore_cos7_force {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_COS7_FORCE_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_D_DPORT_FC_TH_ADDR (0x704600)
+#define NBL_DSTORE_D_DPORT_FC_TH_DEPTH (5)
+#define NBL_DSTORE_D_DPORT_FC_TH_WIDTH (32)
+#define NBL_DSTORE_D_DPORT_FC_TH_DWLEN (1)
+union dstore_d_dport_fc_th_u {
+ struct dstore_d_dport_fc_th {
+ u32 xoff_th:11; /* [10:0] Default:200 RW */
+ u32 rsv1:5; /* [15:11] Default:0x0 RO */
+ u32 xon_th:11; /* [26:16] Default:100 RW */
+ u32 rsv:3; /* [29:27] Default:0x0 RO */
+ u32 fc_set:1; /* [30:30] Default:0x0 RW */
+ u32 fc_en:1; /* [31:31] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_DSTORE_D_DPORT_FC_TH_DWLEN];
+} __packed;
+#define NBL_DSTORE_D_DPORT_FC_TH_REG(r) (NBL_DSTORE_D_DPORT_FC_TH_ADDR + \
+ (NBL_DSTORE_D_DPORT_FC_TH_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_INIT_DONE_ADDR (0x704800)
+#define NBL_DSTORE_INIT_DONE_DEPTH (1)
+#define NBL_DSTORE_INIT_DONE_WIDTH (32)
+#define NBL_DSTORE_INIT_DONE_DWLEN (1)
+union dstore_init_done_u {
+ struct dstore_init_done {
+ u32 done:1; /* [0] Default:0x0 RO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_ADDR (0x70481c)
+#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_DEPTH (1)
+#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_WIDTH (32)
+#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_DWLEN (1)
+union dstore_sch_idle_list_status_curr_u {
+ struct dstore_sch_idle_list_status_curr {
+ u32 empt:1; /* [0] Default:0x0 RO */
+ u32 full:1; /* [1] Default:0x1 RO */
+ u32 cnt:10; /* [11:2] Default:0x200 RO */
+ u32 rsv:20; /* [31:12] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_SCH_QUE_LIST_STATUS_ADDR (0x704820)
+#define NBL_DSTORE_SCH_QUE_LIST_STATUS_DEPTH (48)
+#define NBL_DSTORE_SCH_QUE_LIST_STATUS_WIDTH (32)
+#define NBL_DSTORE_SCH_QUE_LIST_STATUS_DWLEN (1)
+union dstore_sch_que_list_status_u {
+ struct dstore_sch_que_list_status {
+ u32 curr_empt:1; /* [0] Default:0x1 RO */
+ u32 curr_cnt:10; /* [10:1] Default:0x0 RO */
+ u32 history_udf:1; /* [11] Default:0x0 RC */
+ u32 rsv:20; /* [31:12] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_SCH_QUE_LIST_STATUS_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_PKT_ADDR (0x705050)
+#define NBL_DSTORE_RCV_TOTAL_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_TOTAL_PKT_DWLEN (1)
+union dstore_rcv_total_pkt_u {
+ struct dstore_rcv_total_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_BYTE_ADDR (0x705054)
+#define NBL_DSTORE_RCV_TOTAL_BYTE_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_BYTE_WIDTH (48)
+#define NBL_DSTORE_RCV_TOTAL_BYTE_DWLEN (2)
+union dstore_rcv_total_byte_u {
+ struct dstore_rcv_total_byte {
+ u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_BYTE_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_ADDR (0x70505c)
+#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_total_right_pkt_u {
+ struct dstore_rcv_total_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_ADDR (0x705060)
+#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_DWLEN (1)
+union dstore_rcv_total_wrong_pkt_u {
+ struct dstore_rcv_total_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_ADDR (0x705064)
+#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_fwd_right_pkt_u {
+ struct dstore_rcv_fwd_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_FWD_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_FWD_WRONG_PKT_ADDR (0x705068)
+#define NBL_DSTORE_RCV_FWD_WRONG_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_FWD_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_FWD_WRONG_PKT_DWLEN (1)
+union dstore_rcv_fwd_wrong_pkt_u {
+ struct dstore_rcv_fwd_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_FWD_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_ADDR (0x70506c)
+#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_herr_right_pkt_u {
+ struct dstore_rcv_herr_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_HERR_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_HERR_WRONG_PKT_ADDR (0x705070)
+#define NBL_DSTORE_RCV_HERR_WRONG_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_HERR_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_HERR_WRONG_PKT_DWLEN (1)
+union dstore_rcv_herr_wrong_pkt_u {
+ struct dstore_rcv_herr_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_HERR_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_TOTAL_PKT_ADDR (0x705074)
+#define NBL_DSTORE_IPRO_TOTAL_PKT_DEPTH (1)
+#define NBL_DSTORE_IPRO_TOTAL_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_TOTAL_PKT_DWLEN (1)
+union dstore_ipro_total_pkt_u {
+ struct dstore_ipro_total_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_TOTAL_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_TOTAL_BYTE_ADDR (0x705078)
+#define NBL_DSTORE_IPRO_TOTAL_BYTE_DEPTH (1)
+#define NBL_DSTORE_IPRO_TOTAL_BYTE_WIDTH (48)
+#define NBL_DSTORE_IPRO_TOTAL_BYTE_DWLEN (2)
+union dstore_ipro_total_byte_u {
+ struct dstore_ipro_total_byte {
+ u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_TOTAL_BYTE_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_ADDR (0x705080)
+#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_DEPTH (1)
+#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_DWLEN (1)
+union dstore_ipro_fwd_right_pkt_u {
+ struct dstore_ipro_fwd_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_FWD_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_ADDR (0x705084)
+#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_DEPTH (1)
+#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_DWLEN (1)
+union dstore_ipro_fwd_wrong_pkt_u {
+ struct dstore_ipro_fwd_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_FWD_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_ADDR (0x705088)
+#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_DEPTH (1)
+#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_DWLEN (1)
+union dstore_ipro_herr_right_pkt_u {
+ struct dstore_ipro_herr_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_HERR_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_ADDR (0x70508c)
+#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_DEPTH (1)
+#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_DWLEN (1)
+union dstore_ipro_herr_wrong_pkt_u {
+ struct dstore_ipro_herr_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_HERR_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PMEM_TOTAL_PKT_ADDR (0x705090)
+#define NBL_DSTORE_PMEM_TOTAL_PKT_DEPTH (1)
+#define NBL_DSTORE_PMEM_TOTAL_PKT_WIDTH (32)
+#define NBL_DSTORE_PMEM_TOTAL_PKT_DWLEN (1)
+union dstore_pmem_total_pkt_u {
+ struct dstore_pmem_total_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_PMEM_TOTAL_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PMEM_TOTAL_BYTE_ADDR (0x705094)
+#define NBL_DSTORE_PMEM_TOTAL_BYTE_DEPTH (1)
+#define NBL_DSTORE_PMEM_TOTAL_BYTE_WIDTH (48)
+#define NBL_DSTORE_PMEM_TOTAL_BYTE_DWLEN (2)
+union dstore_pmem_total_byte_u {
+ struct dstore_pmem_total_byte {
+ u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_PMEM_TOTAL_BYTE_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_ADDR (0x70509c)
+#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_DWLEN (1)
+union dstore_rcv_total_err_drop_pkt_u {
+ struct dstore_rcv_total_err_drop_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_ADDR (0x7050a0)
+#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_DWLEN (1)
+union dstore_rcv_total_short_pkt_u {
+ struct dstore_rcv_total_short_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_SHORT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_ADDR (0x7050a4)
+#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_DEPTH (1)
+#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_DWLEN (1)
+union dstore_rcv_total_long_pkt_u {
+ struct dstore_rcv_total_long_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_TOTAL_LONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_ADDR (0x7050a8)
+#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_DEPTH (1)
+#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_DWLEN (1)
+union dstore_buf_total_drop_pkt_u {
+ struct dstore_buf_total_drop_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_BUF_TOTAL_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_ADDR (0x7050ac)
+#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_DEPTH (1)
+#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_WIDTH (32)
+#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_DWLEN (1)
+union dstore_buf_total_trun_pkt_u {
+ struct dstore_buf_total_trun_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_BUF_TOTAL_TRUN_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_PKT_ADDR (0x706000)
+#define NBL_DSTORE_RCV_PORT_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_PKT_DWLEN (1)
+union dstore_rcv_port_pkt_u {
+ struct dstore_rcv_port_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_PKT_DWLEN];
+} __packed;
+#define NBL_DSTORE_RCV_PORT_PKT_REG(r) (NBL_DSTORE_RCV_PORT_PKT_ADDR + \
+ (NBL_DSTORE_RCV_PORT_PKT_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_RCV_PORT_BYTE_ADDR (0x706040)
+#define NBL_DSTORE_RCV_PORT_BYTE_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_BYTE_WIDTH (48)
+#define NBL_DSTORE_RCV_PORT_BYTE_DWLEN (2)
+union dstore_rcv_port_byte_u {
+ struct dstore_rcv_port_byte {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_BYTE_DWLEN];
+} __packed;
+#define NBL_DSTORE_RCV_PORT_BYTE_REG(r) (NBL_DSTORE_RCV_PORT_BYTE_ADDR + \
+ (NBL_DSTORE_RCV_PORT_BYTE_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_ADDR (0x7060c0)
+#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_port_total_right_pkt_u {
+ struct dstore_rcv_port_total_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_ADDR (0x706100)
+#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_DWLEN (1)
+union dstore_rcv_port_total_wrong_pkt_u {
+ struct dstore_rcv_port_total_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_ADDR (0x706140)
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_port_fwd_right_pkt_u {
+ struct dstore_rcv_port_fwd_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_ADDR (0x706180)
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_DWLEN (1)
+union dstore_rcv_port_fwd_wrong_pkt_u {
+ struct dstore_rcv_port_fwd_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_ADDR (0x7061c0)
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_port_herr_right_pkt_u {
+ struct dstore_rcv_port_herr_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_ADDR (0x706200)
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_DWLEN (1)
+union dstore_rcv_port_herr_wrong_pkt_u {
+ struct dstore_rcv_port_herr_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_PORT_PKT_ADDR (0x706240)
+#define NBL_DSTORE_IPRO_PORT_PKT_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_PORT_PKT_DWLEN (1)
+union dstore_ipro_port_pkt_u {
+ struct dstore_ipro_port_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_PORT_PKT_DWLEN];
+} __packed;
+#define NBL_DSTORE_IPRO_PORT_PKT_REG(r) (NBL_DSTORE_IPRO_PORT_PKT_ADDR + \
+ (NBL_DSTORE_IPRO_PORT_PKT_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_IPRO_PORT_BYTE_ADDR (0x706280)
+#define NBL_DSTORE_IPRO_PORT_BYTE_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_BYTE_WIDTH (48)
+#define NBL_DSTORE_IPRO_PORT_BYTE_DWLEN (2)
+union dstore_ipro_port_byte_u {
+ struct dstore_ipro_port_byte {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_PORT_BYTE_DWLEN];
+} __packed;
+#define NBL_DSTORE_IPRO_PORT_BYTE_REG(r) (NBL_DSTORE_IPRO_PORT_BYTE_ADDR + \
+ (NBL_DSTORE_IPRO_PORT_BYTE_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_ADDR (0x706300)
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_DWLEN (1)
+union dstore_ipro_port_fwd_right_pkt_u {
+ struct dstore_ipro_port_fwd_right_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_ADDR (0x706340)
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_DWLEN (1)
+union dstore_ipro_port_fwd_wrong_pkt_u {
+ struct dstore_ipro_port_fwd_wrong_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PMEM_PORT_PKT_ADDR (0x706380)
+#define NBL_DSTORE_PMEM_PORT_PKT_DEPTH (12)
+#define NBL_DSTORE_PMEM_PORT_PKT_WIDTH (32)
+#define NBL_DSTORE_PMEM_PORT_PKT_DWLEN (1)
+union dstore_pmem_port_pkt_u {
+ struct dstore_pmem_port_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_PMEM_PORT_PKT_DWLEN];
+} __packed;
+#define NBL_DSTORE_PMEM_PORT_PKT_REG(r) (NBL_DSTORE_PMEM_PORT_PKT_ADDR + \
+ (NBL_DSTORE_PMEM_PORT_PKT_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_PMEM_PORT_BYTE_ADDR (0x7063c0)
+#define NBL_DSTORE_PMEM_PORT_BYTE_DEPTH (12)
+#define NBL_DSTORE_PMEM_PORT_BYTE_WIDTH (48)
+#define NBL_DSTORE_PMEM_PORT_BYTE_DWLEN (2)
+union dstore_pmem_port_byte_u {
+ struct dstore_pmem_port_byte {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_PMEM_PORT_BYTE_DWLEN];
+} __packed;
+#define NBL_DSTORE_PMEM_PORT_BYTE_REG(r) (NBL_DSTORE_PMEM_PORT_BYTE_ADDR + \
+ (NBL_DSTORE_PMEM_PORT_BYTE_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_ADDR (0x706440)
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_DWLEN (1)
+union dstore_rcv_err_port_drop_pkt_u {
+ struct dstore_rcv_err_port_drop_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_ADDR (0x706480)
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_DWLEN (1)
+union dstore_rcv_port_short_drop_pkt_u {
+ struct dstore_rcv_port_short_drop_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_ADDR (0x7064c0)
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_DWLEN (1)
+union dstore_rcv_port_long_pkt_u {
+ struct dstore_rcv_port_long_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_RCV_PORT_LONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_ADDR (0x706500)
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_DEPTH (12)
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_DWLEN (1)
+union dstore_buf_port_drop_pkt_u {
+ struct dstore_buf_port_drop_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_BUF_PORT_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_ADDR (0x706540)
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_DEPTH (12)
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_WIDTH (32)
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_DWLEN (1)
+union dstore_buf_port_trun_pkt_u {
+ struct dstore_buf_port_trun_pkt {
+ u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_DSTORE_BUF_PORT_TRUN_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BP_CUR_1ST_ADDR (0x706580)
+#define NBL_DSTORE_BP_CUR_1ST_DEPTH (1)
+#define NBL_DSTORE_BP_CUR_1ST_WIDTH (32)
+#define NBL_DSTORE_BP_CUR_1ST_DWLEN (1)
+union dstore_bp_cur_1st_u {
+ struct dstore_bp_cur_1st {
+ u32 link_fc:6; /* [5:0] Default:0x0 RO */
+ u32 rsv:2; /* [7:6] Default:0x0 RO */
+ u32 pfc:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_BP_CUR_1ST_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BP_CUR_2ND_ADDR (0x706584)
+#define NBL_DSTORE_BP_CUR_2ND_DEPTH (1)
+#define NBL_DSTORE_BP_CUR_2ND_WIDTH (32)
+#define NBL_DSTORE_BP_CUR_2ND_DWLEN (1)
+union dstore_bp_cur_2nd_u {
+ struct dstore_bp_cur_2nd {
+ u32 pfc:24; /* [23:0] Default:0x0 RO */
+ u32 rsv:8; /* [31:24] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_BP_CUR_2ND_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BP_HISTORY_LINK_ADDR (0x706590)
+#define NBL_DSTORE_BP_HISTORY_LINK_DEPTH (6)
+#define NBL_DSTORE_BP_HISTORY_LINK_WIDTH (32)
+#define NBL_DSTORE_BP_HISTORY_LINK_DWLEN (1)
+union dstore_bp_history_link_u {
+ struct dstore_bp_history_link {
+ u32 fc:1; /* [0] Default:0x0 RC */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_BP_HISTORY_LINK_DWLEN];
+} __packed;
+#define NBL_DSTORE_BP_HISTORY_LINK_REG(r) (NBL_DSTORE_BP_HISTORY_LINK_ADDR + \
+ (NBL_DSTORE_BP_HISTORY_LINK_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_BP_HISTORY_ADDR (0x7065b0)
+#define NBL_DSTORE_BP_HISTORY_DEPTH (48)
+#define NBL_DSTORE_BP_HISTORY_WIDTH (32)
+#define NBL_DSTORE_BP_HISTORY_DWLEN (1)
+union dstore_bp_history_u {
+ struct dstore_bp_history {
+ u32 pfc:1; /* [0] Default:0x0 RC */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_BP_HISTORY_DWLEN];
+} __packed;
+#define NBL_DSTORE_BP_HISTORY_REG(r) (NBL_DSTORE_BP_HISTORY_ADDR + \
+ (NBL_DSTORE_BP_HISTORY_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_WRR_CUR_ADDR (0x706800)
+#define NBL_DSTORE_WRR_CUR_DEPTH (36)
+#define NBL_DSTORE_WRR_CUR_WIDTH (32)
+#define NBL_DSTORE_WRR_CUR_DWLEN (1)
+union dstore_wrr_cur_u {
+ struct dstore_wrr_cur {
+ u32 wgt_cos:5; /* [4:0] Default:0x0 RO */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_WRR_CUR_DWLEN];
+} __packed;
+#define NBL_DSTORE_WRR_CUR_REG(r) (NBL_DSTORE_WRR_CUR_ADDR + \
+ (NBL_DSTORE_WRR_CUR_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_DDPORT_CUR_ADDR (0x707018)
+#define NBL_DSTORE_DDPORT_CUR_DEPTH (1)
+#define NBL_DSTORE_DDPORT_CUR_WIDTH (32)
+#define NBL_DSTORE_DDPORT_CUR_DWLEN (1)
+union dstore_ddport_cur_u {
+ struct dstore_ddport_cur {
+ u32 link_fc:5; /* [4:0] Default:0x0 RO */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_DDPORT_CUR_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_DDPORT_HISTORY_ADDR (0x70701c)
+#define NBL_DSTORE_DDPORT_HISTORY_DEPTH (5)
+#define NBL_DSTORE_DDPORT_HISTORY_WIDTH (32)
+#define NBL_DSTORE_DDPORT_HISTORY_DWLEN (1)
+union dstore_ddport_history_u {
+ struct dstore_ddport_history {
+ u32 link_fc:1; /* [0] Default:0x0 RC */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_DDPORT_HISTORY_DWLEN];
+} __packed;
+#define NBL_DSTORE_DDPORT_HISTORY_REG(r) (NBL_DSTORE_DDPORT_HISTORY_ADDR + \
+ (NBL_DSTORE_DDPORT_HISTORY_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_DDPORT_RSC_ADD_ADDR (0x707050)
+#define NBL_DSTORE_DDPORT_RSC_ADD_DEPTH (5)
+#define NBL_DSTORE_DDPORT_RSC_ADD_WIDTH (32)
+#define NBL_DSTORE_DDPORT_RSC_ADD_DWLEN (1)
+union dstore_ddport_rsc_add_u {
+ struct dstore_ddport_rsc_add {
+ u32 cnt:12; /* [11:0] Default:0x0 RO */
+ u32 rsv:20; /* [31:12] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_DSTORE_DDPORT_RSC_ADD_DWLEN];
+} __packed;
+#define NBL_DSTORE_DDPORT_RSC_ADD_REG(r) (NBL_DSTORE_DDPORT_RSC_ADD_ADDR + \
+ (NBL_DSTORE_DDPORT_RSC_ADD_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
new file mode 100644
index 000000000000..3504c272c4d4
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_UCAR_H
+#define NBL_UCAR_H 1
+
+#include <linux/types.h>
+
+#define NBL_UCAR_BASE (0x00E84000)
+
+#define NBL_UCAR_INT_STATUS_ADDR (0xe84000)
+#define NBL_UCAR_INT_STATUS_DEPTH (1)
+#define NBL_UCAR_INT_STATUS_WIDTH (32)
+#define NBL_UCAR_INT_STATUS_DWLEN (1)
+union ucar_int_status_u {
+ struct ucar_int_status {
+ u32 color_err:1; /* [0] Default:0x0 RWC */
+ u32 parity_err:1; /* [1] Default:0x0 RWC */
+ u32 fifo_uflw_err:1; /* [2] Default:0x0 RWC */
+ u32 cif_err:1; /* [3] Default:0x0 RWC */
+ u32 fifo_dflw_err:1; /* [4] Default:0x0 RWC */
+ u32 atid_nomat_err:1; /* [5] Default:0x0 RWC */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INT_MASK_ADDR (0xe84004)
+#define NBL_UCAR_INT_MASK_DEPTH (1)
+#define NBL_UCAR_INT_MASK_WIDTH (32)
+#define NBL_UCAR_INT_MASK_DWLEN (1)
+union ucar_int_mask_u {
+ struct ucar_int_mask {
+ u32 color_err:1; /* [0] Default:0x1 RW */
+ u32 parity_err:1; /* [1] Default:0x0 RW */
+ u32 fifo_uflw_err:1; /* [2] Default:0x0 RW */
+ u32 cif_err:1; /* [3] Default:0x0 RW */
+ u32 fifo_dflw_err:1; /* [4] Default:0x0 RW */
+ u32 atid_nomat_err:1; /* [5] Default:0x1 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INT_SET_ADDR (0xe84008)
+#define NBL_UCAR_INT_SET_DEPTH (1)
+#define NBL_UCAR_INT_SET_WIDTH (32)
+#define NBL_UCAR_INT_SET_DWLEN (1)
+union ucar_int_set_u {
+ struct ucar_int_set {
+ u32 color_err:1; /* [0] Default:0x0 WO */
+ u32 parity_err:1; /* [1] Default:0x0 WO */
+ u32 fifo_uflw_err:1; /* [2] Default:0x0 WO */
+ u32 cif_err:1; /* [3] Default:0x0 WO */
+ u32 fifo_dflw_err:1; /* [4] Default:0x0 WO */
+ u32 atid_nomat_err:1; /* [5] Default:0x0 WO */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_UCAR_PARITY_ERR_INFO_ADDR (0xe84104)
+#define NBL_UCAR_PARITY_ERR_INFO_DEPTH (1)
+#define NBL_UCAR_PARITY_ERR_INFO_WIDTH (32)
+#define NBL_UCAR_PARITY_ERR_INFO_DWLEN (1)
+union ucar_parity_err_info_u {
+ struct ucar_parity_err_info {
+ u32 ram_addr:12; /* [11:0] Default:0x0 RO */
+ u32 ram_id:3; /* [14:12] Default:0x0 RO */
+ u32 rsv:17; /* [31:15] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_PARITY_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CIF_ERR_INFO_ADDR (0xe8411c)
+#define NBL_UCAR_CIF_ERR_INFO_DEPTH (1)
+#define NBL_UCAR_CIF_ERR_INFO_WIDTH (32)
+#define NBL_UCAR_CIF_ERR_INFO_DWLEN (1)
+union ucar_cif_err_info_u {
+ struct ucar_cif_err_info {
+ u32 addr:30; /* [29:0] Default:0x0 RO */
+ u32 wr_err:1; /* [30] Default:0x0 RO */
+ u32 ucor_err:1; /* [31] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_ADDR (0xe84134)
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_DEPTH (1)
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_WIDTH (32)
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_DWLEN (1)
+union ucar_atid_nomat_err_info_u {
+ struct ucar_atid_nomat_err_info {
+ u32 id:2; /* [1:0] Default:0x0 RO */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_ATID_NOMAT_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_CTRL_ADDR (0xe84200)
+#define NBL_UCAR_CAR_CTRL_DEPTH (1)
+#define NBL_UCAR_CAR_CTRL_WIDTH (32)
+#define NBL_UCAR_CAR_CTRL_DWLEN (1)
+union ucar_car_ctrl_u {
+ struct ucar_car_ctrl {
+ u32 sctr_car:1; /* [0] Default:0x1 RW */
+ u32 rctr_car:1; /* [1] Default:0x1 RW */
+ u32 rc_car:1; /* [2] Default:0x1 RW */
+ u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INIT_START_ADDR (0xe84204)
+#define NBL_UCAR_INIT_START_DEPTH (1)
+#define NBL_UCAR_INIT_START_WIDTH (32)
+#define NBL_UCAR_INIT_START_DWLEN (1)
+union ucar_init_start_u {
+ struct ucar_init_start {
+ u32 start:1; /* [0] Default:0x0 WO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FWD_CARID_ADDR (0xe84210)
+#define NBL_UCAR_FWD_CARID_DEPTH (1)
+#define NBL_UCAR_FWD_CARID_WIDTH (32)
+#define NBL_UCAR_FWD_CARID_DWLEN (1)
+union ucar_fwd_carid_u {
+ struct ucar_fwd_carid {
+ u32 act_id:6; /* [5:0] Default:0x5 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_FWD_CARID_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FWD_FLOW_CAR_ADDR (0xe84214)
+#define NBL_UCAR_FWD_FLOW_CAR_DEPTH (1)
+#define NBL_UCAR_FWD_FLOW_CAR_WIDTH (32)
+#define NBL_UCAR_FWD_FLOW_CAR_DWLEN (1)
+union ucar_fwd_flow_car_u {
+ struct ucar_fwd_flow_car {
+ u32 act_id:6; /* [5:0] Default:0x6 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_FWD_FLOW_CAR_DWLEN];
+} __packed;
+
+#define NBL_UCAR_PBS_SUB_ADDR (0xe84224)
+#define NBL_UCAR_PBS_SUB_DEPTH (1)
+#define NBL_UCAR_PBS_SUB_WIDTH (32)
+#define NBL_UCAR_PBS_SUB_DWLEN (1)
+union ucar_pbs_sub_u {
+ struct ucar_pbs_sub {
+ u32 sel:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_PBS_SUB_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FLOW_TIMMING_ADD_ADDR (0xe84400)
+#define NBL_UCAR_FLOW_TIMMING_ADD_DEPTH (1)
+#define NBL_UCAR_FLOW_TIMMING_ADD_WIDTH (32)
+#define NBL_UCAR_FLOW_TIMMING_ADD_DWLEN (1)
+union ucar_flow_timming_add_u {
+ struct ucar_flow_timming_add {
+ u32 cycle_max:12; /* [11:0] Default:0x4 RW */
+ u32 rsv1:4; /* [15:12] Default:0x0 RO */
+ u32 depth:14; /* [29:16] Default:0x4B0 RW */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_FLOW_TIMMING_ADD_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_ADDR (0xe84404)
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_DEPTH (1)
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_WIDTH (32)
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_DWLEN (1)
+union ucar_flow_4k_timming_add_u {
+ struct ucar_flow_4k_timming_add {
+ u32 cycle_max:12; /* [11:0] Default:0x4 RW */
+ u32 depth:18; /* [29:12] Default:0x12C0 RW */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_FLOW_4K_TIMMING_ADD_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INIT_DONE_ADDR (0xe84408)
+#define NBL_UCAR_INIT_DONE_DEPTH (1)
+#define NBL_UCAR_INIT_DONE_WIDTH (32)
+#define NBL_UCAR_INIT_DONE_DWLEN (1)
+union ucar_init_done_u {
+ struct ucar_init_done {
+ u32 done:1; /* [0] Default:0x0 RO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INPUT_CELL_ADDR (0xe8441c)
+#define NBL_UCAR_INPUT_CELL_DEPTH (1)
+#define NBL_UCAR_INPUT_CELL_WIDTH (32)
+#define NBL_UCAR_INPUT_CELL_DWLEN (1)
+union ucar_input_cell_u {
+ struct ucar_input_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_INPUT_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_RD_CELL_ADDR (0xe84420)
+#define NBL_UCAR_RD_CELL_DEPTH (1)
+#define NBL_UCAR_RD_CELL_WIDTH (32)
+#define NBL_UCAR_RD_CELL_DWLEN (1)
+union ucar_rd_cell_u {
+ struct ucar_rd_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_RD_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_CELL_ADDR (0xe84424)
+#define NBL_UCAR_CAR_CELL_DEPTH (1)
+#define NBL_UCAR_CAR_CELL_WIDTH (32)
+#define NBL_UCAR_CAR_CELL_DWLEN (1)
+union ucar_car_cell_u {
+ struct ucar_car_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_CAR_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_FLOW_CELL_ADDR (0xe84428)
+#define NBL_UCAR_CAR_FLOW_CELL_DEPTH (1)
+#define NBL_UCAR_CAR_FLOW_CELL_WIDTH (32)
+#define NBL_UCAR_CAR_FLOW_CELL_DWLEN (1)
+union ucar_car_flow_cell_u {
+ struct ucar_car_flow_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_CAR_FLOW_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_FLOW_4K_CELL_ADDR (0xe8442c)
+#define NBL_UCAR_CAR_FLOW_4K_CELL_DEPTH (1)
+#define NBL_UCAR_CAR_FLOW_4K_CELL_WIDTH (32)
+#define NBL_UCAR_CAR_FLOW_4K_CELL_DWLEN (1)
+union ucar_car_flow_4k_cell_u {
+ struct ucar_car_flow_4k_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_CAR_FLOW_4K_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_NOCAR_CELL_ADDR (0xe84430)
+#define NBL_UCAR_NOCAR_CELL_DEPTH (1)
+#define NBL_UCAR_NOCAR_CELL_WIDTH (32)
+#define NBL_UCAR_NOCAR_CELL_DWLEN (1)
+union ucar_nocar_cell_u {
+ struct ucar_nocar_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_NOCAR_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_NOCAR_ERR_ADDR (0xe84434)
+#define NBL_UCAR_NOCAR_ERR_DEPTH (1)
+#define NBL_UCAR_NOCAR_ERR_WIDTH (32)
+#define NBL_UCAR_NOCAR_ERR_DWLEN (1)
+union ucar_nocar_err_u {
+ struct ucar_nocar_err {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_NOCAR_ERR_DWLEN];
+} __packed;
+
+#define NBL_UCAR_GREEN_CELL_ADDR (0xe84438)
+#define NBL_UCAR_GREEN_CELL_DEPTH (1)
+#define NBL_UCAR_GREEN_CELL_WIDTH (32)
+#define NBL_UCAR_GREEN_CELL_DWLEN (1)
+union ucar_green_cell_u {
+ struct ucar_green_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_GREEN_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_YELLOW_CELL_ADDR (0xe8443c)
+#define NBL_UCAR_YELLOW_CELL_DEPTH (1)
+#define NBL_UCAR_YELLOW_CELL_WIDTH (32)
+#define NBL_UCAR_YELLOW_CELL_DWLEN (1)
+union ucar_yellow_cell_u {
+ struct ucar_yellow_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_YELLOW_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_RED_CELL_ADDR (0xe84440)
+#define NBL_UCAR_RED_CELL_DEPTH (1)
+#define NBL_UCAR_RED_CELL_WIDTH (32)
+#define NBL_UCAR_RED_CELL_DWLEN (1)
+union ucar_red_cell_u {
+ struct ucar_red_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_RED_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_NOCAR_PKT_ADDR (0xe84444)
+#define NBL_UCAR_NOCAR_PKT_DEPTH (1)
+#define NBL_UCAR_NOCAR_PKT_WIDTH (48)
+#define NBL_UCAR_NOCAR_PKT_DWLEN (2)
+union ucar_nocar_pkt_u {
+ struct ucar_nocar_pkt {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_NOCAR_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_GREEN_PKT_ADDR (0xe8444c)
+#define NBL_UCAR_GREEN_PKT_DEPTH (1)
+#define NBL_UCAR_GREEN_PKT_WIDTH (48)
+#define NBL_UCAR_GREEN_PKT_DWLEN (2)
+union ucar_green_pkt_u {
+ struct ucar_green_pkt {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_GREEN_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_YELLOW_PKT_ADDR (0xe84454)
+#define NBL_UCAR_YELLOW_PKT_DEPTH (1)
+#define NBL_UCAR_YELLOW_PKT_WIDTH (48)
+#define NBL_UCAR_YELLOW_PKT_DWLEN (2)
+union ucar_yellow_pkt_u {
+ struct ucar_yellow_pkt {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_YELLOW_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_RED_PKT_ADDR (0xe8445c)
+#define NBL_UCAR_RED_PKT_DEPTH (1)
+#define NBL_UCAR_RED_PKT_WIDTH (48)
+#define NBL_UCAR_RED_PKT_DWLEN (2)
+union ucar_red_pkt_u {
+ struct ucar_red_pkt {
+ u32 cnt_l:32; /* [31:0] Default:0x0 RCTR */
+ u32 cnt_h:16; /* [47:32] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_RED_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_ADDR (0xe84464)
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_DEPTH (1)
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_WIDTH (32)
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_DWLEN (1)
+union ucar_fwd_type_wrong_cell_u {
+ struct ucar_fwd_type_wrong_cell {
+ u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+ } __packed info;
+ u32 data[NBL_UCAR_FWD_TYPE_WRONG_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FLOW_ADDR (0xe88000)
+#define NBL_UCAR_FLOW_DEPTH (1024)
+#define NBL_UCAR_FLOW_WIDTH (128)
+#define NBL_UCAR_FLOW_DWLEN (4)
+union ucar_flow_u {
+ struct ucar_flow {
+ u32 valid:1; /* [0] Default:0x0 RW */
+ u32 depth:19; /* [19:1] Default:0x0 RW */
+ u32 cir:19; /* [38:20] Default:0x0 RW */
+ u32 pir:19; /* [57:39] Default:0x0 RW */
+ u32 cbs:21; /* [78:58] Default:0x0 RW */
+ u32 pbs:21; /* [99:79] Default:0x0 RW */
+ u32 rsv:28; /* [127:100] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_FLOW_DWLEN];
+} __packed;
+#define NBL_UCAR_FLOW_REG(r) (NBL_UCAR_FLOW_ADDR + \
+ (NBL_UCAR_FLOW_DWLEN * 4) * (r))
+
+#define NBL_UCAR_FLOW_4K_ADDR (0xe94000)
+#define NBL_UCAR_FLOW_4K_DEPTH (4096)
+#define NBL_UCAR_FLOW_4K_WIDTH (128)
+#define NBL_UCAR_FLOW_4K_DWLEN (4)
+union ucar_flow_4k_u {
+ struct ucar_flow_4k {
+ u32 valid:1; /* [0] Default:0x0 RW */
+ u32 depth:21; /* [21:1] Default:0x0 RW */
+ u32 cir:21; /* [42:22] Default:0x0 RW */
+ u32 pir:21; /* [63:43] Default:0x0 RW */
+ u32 cbs:23; /* [86:64] Default:0x0 RW */
+ u32 pbs:23; /* [109:87] Default:0x0 RW */
+ u32 rsv:18; /* [127:110] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_UCAR_FLOW_4K_DWLEN];
+} __packed;
+#define NBL_UCAR_FLOW_4K_REG(r) (NBL_UCAR_FLOW_4K_ADDR + \
+ (NBL_UCAR_FLOW_4K_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
new file mode 100644
index 000000000000..47bda61dbf97
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#include "nbl_ppe_ipro.h"
+#include "nbl_ppe_epro.h"
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
new file mode 100644
index 000000000000..7c36f4ad11b4
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
@@ -0,0 +1,665 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_EPRO_H
+#define NBL_EPRO_H 1
+
+#include <linux/types.h>
+
+#define NBL_EPRO_BASE (0x00E74000)
+
+#define NBL_EPRO_INT_STATUS_ADDR (0xe74000)
+#define NBL_EPRO_INT_STATUS_DEPTH (1)
+#define NBL_EPRO_INT_STATUS_WIDTH (32)
+#define NBL_EPRO_INT_STATUS_DWLEN (1)
+union epro_int_status_u {
+ struct epro_int_status {
+ u32 fatal_err:1; /* [0] Default:0x0 RWC */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 RWC */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 RWC */
+ u32 cif_err:1; /* [3] Default:0x0 RWC */
+ u32 input_err:1; /* [4] Default:0x0 RWC */
+ u32 cfg_err:1; /* [5] Default:0x0 RWC */
+ u32 data_ucor_err:1; /* [6] Default:0x0 RWC */
+ u32 data_cor_err:1; /* [7] Default:0x0 RWC */
+ u32 rsv:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INT_MASK_ADDR (0xe74004)
+#define NBL_EPRO_INT_MASK_DEPTH (1)
+#define NBL_EPRO_INT_MASK_WIDTH (32)
+#define NBL_EPRO_INT_MASK_DWLEN (1)
+union epro_int_mask_u {
+ struct epro_int_mask {
+ u32 fatal_err:1; /* [0] Default:0x0 RW */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 RW */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 RW */
+ u32 cif_err:1; /* [3] Default:0x0 RW */
+ u32 input_err:1; /* [4] Default:0x0 RW */
+ u32 cfg_err:1; /* [5] Default:0x0 RW */
+ u32 data_ucor_err:1; /* [6] Default:0x0 RW */
+ u32 data_cor_err:1; /* [7] Default:0x0 RW */
+ u32 rsv:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INT_SET_ADDR (0xe74008)
+#define NBL_EPRO_INT_SET_DEPTH (1)
+#define NBL_EPRO_INT_SET_WIDTH (32)
+#define NBL_EPRO_INT_SET_DWLEN (1)
+union epro_int_set_u {
+ struct epro_int_set {
+ u32 fatal_err:1; /* [0] Default:0x0 WO */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 WO */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 WO */
+ u32 cif_err:1; /* [3] Default:0x0 WO */
+ u32 input_err:1; /* [4] Default:0x0 WO */
+ u32 cfg_err:1; /* [5] Default:0x0 WO */
+ u32 data_ucor_err:1; /* [6] Default:0x0 WO */
+ u32 data_cor_err:1; /* [7] Default:0x0 WO */
+ u32 rsv:24; /* [31:8] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INIT_DONE_ADDR (0xe7400c)
+#define NBL_EPRO_INIT_DONE_DEPTH (1)
+#define NBL_EPRO_INIT_DONE_WIDTH (32)
+#define NBL_EPRO_INIT_DONE_DWLEN (1)
+union epro_init_done_u {
+ struct epro_init_done {
+ u32 done:1; /* [0] Default:0x0 RO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CIF_ERR_INFO_ADDR (0xe74040)
+#define NBL_EPRO_CIF_ERR_INFO_DEPTH (1)
+#define NBL_EPRO_CIF_ERR_INFO_WIDTH (32)
+#define NBL_EPRO_CIF_ERR_INFO_DWLEN (1)
+union epro_cif_err_info_u {
+ struct epro_cif_err_info {
+ u32 addr:30; /* [29:0] Default:0x0 RO */
+ u32 wr_err:1; /* [30] Default:0x0 RO */
+ u32 ucor_err:1; /* [31] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CFG_ERR_INFO_ADDR (0xe74050)
+#define NBL_EPRO_CFG_ERR_INFO_DEPTH (1)
+#define NBL_EPRO_CFG_ERR_INFO_WIDTH (32)
+#define NBL_EPRO_CFG_ERR_INFO_DWLEN (1)
+union epro_cfg_err_info_u {
+ struct epro_cfg_err_info {
+ u32 addr:10; /* [9:0] Default:0x0 RO */
+ u32 id:3; /* [12:10] Default:0x0 RO */
+ u32 rsv:19; /* [31:13] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_CFG_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CAR_CTRL_ADDR (0xe74100)
+#define NBL_EPRO_CAR_CTRL_DEPTH (1)
+#define NBL_EPRO_CAR_CTRL_WIDTH (32)
+#define NBL_EPRO_CAR_CTRL_DWLEN (1)
+union epro_car_ctrl_u {
+ struct epro_car_ctrl {
+ u32 sctr_car:1; /* [0] Default:0x1 RW */
+ u32 rctr_car:1; /* [1] Default:0x1 RW */
+ u32 rc_car:1; /* [2] Default:0x1 RW */
+ u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INIT_START_ADDR (0xe74180)
+#define NBL_EPRO_INIT_START_DEPTH (1)
+#define NBL_EPRO_INIT_START_WIDTH (32)
+#define NBL_EPRO_INIT_START_DWLEN (1)
+union epro_init_start_u {
+ struct epro_init_start {
+ u32 start:1; /* [0] Default:0x0 WO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_EPRO_FLAG_SEL_ADDR (0xe74200)
+#define NBL_EPRO_FLAG_SEL_DEPTH (1)
+#define NBL_EPRO_FLAG_SEL_WIDTH (32)
+#define NBL_EPRO_FLAG_SEL_DWLEN (1)
+union epro_flag_sel_u {
+ struct epro_flag_sel {
+ u32 dir_offset_en:1; /* [0] Default:0x1 RW */
+ u32 dir_offset:5; /* [5:1] Default:0x0 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_FLAG_SEL_DWLEN];
+} __packed;
+
+#define NBL_EPRO_ACT_SEL_EN_ADDR (0xe74214)
+#define NBL_EPRO_ACT_SEL_EN_DEPTH (1)
+#define NBL_EPRO_ACT_SEL_EN_WIDTH (32)
+#define NBL_EPRO_ACT_SEL_EN_DWLEN (1)
+union epro_act_sel_en_u {
+ struct epro_act_sel_en {
+ u32 rssidx_en:1; /* [0] Default:0x1 RW */
+ u32 dport_en:1; /* [1] Default:0x1 RW */
+ u32 mirroridx_en:1; /* [2] Default:0x1 RW */
+ u32 dqueue_en:1; /* [3] Default:0x1 RW */
+ u32 encap_en:1; /* [4] Default:0x1 RW */
+ u32 pop_8021q_en:1; /* [5] Default:0x1 RW */
+ u32 pop_qinq_en:1; /* [6] Default:0x1 RW */
+ u32 push_cvlan_en:1; /* [7] Default:0x1 RW */
+ u32 push_svlan_en:1; /* [8] Default:0x1 RW */
+ u32 replace_cvlan_en:1; /* [9] Default:0x1 RW */
+ u32 replace_svlan_en:1; /* [10] Default:0x1 RW */
+ u32 rsv:21; /* [31:11] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_ACT_SEL_EN_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID0_ADDR (0xe74218)
+#define NBL_EPRO_AM_ACT_ID0_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID0_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID0_DWLEN (1)
+union epro_am_act_id0_u {
+ struct epro_am_act_id0 {
+ u32 replace_cvlan:6; /* [5:0] Default:0x2b RW */
+ u32 rsv3:2; /* [7:6] Default:0x0 RO */
+ u32 replace_svlan:6; /* [13:8] Default:0x2a RW */
+ u32 rsv2:2; /* [15:14] Default:0x0 RO */
+ u32 push_cvlan:6; /* [21:16] Default:0x2d RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 push_svlan:6; /* [29:24] Default:0x2c RW */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_AM_ACT_ID0_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID1_ADDR (0xe7421c)
+#define NBL_EPRO_AM_ACT_ID1_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID1_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID1_DWLEN (1)
+union epro_am_act_id1_u {
+ struct epro_am_act_id1 {
+ u32 pop_qinq:6; /* [5:0] Default:0x29 RW */
+ u32 rsv3:2; /* [7:6] Default:0x0 RO */
+ u32 pop_8021q:6; /* [13:8] Default:0x28 RW */
+ u32 rsv2:2; /* [15:14] Default:0x0 RO */
+ u32 dport:6; /* [21:16] Default:0x9 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 dqueue:6; /* [29:24] Default:0xa RW */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_AM_ACT_ID1_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID2_ADDR (0xe74220)
+#define NBL_EPRO_AM_ACT_ID2_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID2_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID2_DWLEN (1)
+union epro_am_act_id2_u {
+ struct epro_am_act_id2 {
+ u32 rssidx:6; /* [5:0] Default:0x4 RW */
+ u32 rsv3:2; /* [7:6] Default:0x0 RO */
+ u32 mirroridx:6; /* [13:8] Default:0x8 RW */
+ u32 rsv2:2; /* [15:14] Default:0x0 RO */
+ u32 car:6; /* [21:16] Default:0x5 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 encap:6; /* [29:24] Default:0x2e RW */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_AM_ACT_ID2_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID3_ADDR (0xe74224)
+#define NBL_EPRO_AM_ACT_ID3_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID3_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID3_DWLEN (1)
+union epro_am_act_id3_u {
+ struct epro_am_act_id3 {
+ u32 outer_sport_mdf:6; /* [5:0] Default:0x30 RW */
+ u32 rsv3:2; /* [7:6] Default:0x0 RO */
+ u32 pri_mdf:6; /* [13:8] Default:0x15 RW */
+ u32 rsv2:2; /* [15:14] Default:0x0 RO */
+ u32 dp_hash0:6; /* [21:16] Default:0x13 RW */
+ u32 rsv1:2; /* [23:22] Default:0x0 RO */
+ u32 dp_hash1:6; /* [29:24] Default:0x14 RW */
+ u32 rsv:2; /* [31:30] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_AM_ACT_ID3_DWLEN];
+} __packed;
+
+#define NBL_EPRO_ACTION_PRIORITY_ADDR (0xe74230)
+#define NBL_EPRO_ACTION_PRIORITY_DEPTH (1)
+#define NBL_EPRO_ACTION_PRIORITY_WIDTH (32)
+#define NBL_EPRO_ACTION_PRIORITY_DWLEN (1)
+union epro_action_priority_u {
+ struct epro_action_priority {
+ u32 mirroridx:2; /* [1:0] Default:0x0 RW */
+ u32 car:2; /* [3:2] Default:0x0 RW */
+ u32 dqueue:2; /* [5:4] Default:0x0 RW */
+ u32 dport:2; /* [7:6] Default:0x0 RW */
+ u32 pop_8021q:2; /* [9:8] Default:0x0 RW */
+ u32 pop_qinq:2; /* [11:10] Default:0x0 RW */
+ u32 replace_inner_vlan:2; /* [13:12] Default:0x0 RW */
+ u32 replace_outer_vlan:2; /* [15:14] Default:0x0 RW */
+ u32 push_inner_vlan:2; /* [17:16] Default:0x0 RW */
+ u32 push_outer_vlan:2; /* [19:18] Default:0x0 RW */
+ u32 outer_sport_mdf:2; /* [21:20] Default:0x0 RW */
+ u32 pri_mdf:2; /* [23:22] Default:0x0 RW */
+ u32 dp_hash0:2; /* [25:24] Default:0x0 RW */
+ u32 dp_hash1:2; /* [27:26] Default:0x0 RW */
+ u32 rsv:4; /* [31:28] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_ACTION_PRIORITY_DWLEN];
+} __packed;
+
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_ADDR (0xe74234)
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_DEPTH (1)
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_WIDTH (32)
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_DWLEN (1)
+union epro_mirror_action_priority_u {
+ struct epro_mirror_action_priority {
+ u32 car:2; /* [1:0] Default:0x0 RW */
+ u32 dqueue:2; /* [3:2] Default:0x0 RW */
+ u32 dport:2; /* [5:4] Default:0x0 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_MIRROR_ACTION_PRIORITY_DWLEN];
+} __packed;
+
+#define NBL_EPRO_SET_FLAGS_ADDR (0xe74238)
+#define NBL_EPRO_SET_FLAGS_DEPTH (1)
+#define NBL_EPRO_SET_FLAGS_WIDTH (32)
+#define NBL_EPRO_SET_FLAGS_DWLEN (1)
+union epro_set_flags_u {
+ struct epro_set_flags {
+ u32 set_flags:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_SET_FLAGS_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CLEAR_FLAGS_ADDR (0xe7423c)
+#define NBL_EPRO_CLEAR_FLAGS_DEPTH (1)
+#define NBL_EPRO_CLEAR_FLAGS_WIDTH (32)
+#define NBL_EPRO_CLEAR_FLAGS_DWLEN (1)
+union epro_clear_flags_u {
+ struct epro_clear_flags {
+ u32 clear_flags:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_CLEAR_FLAGS_DWLEN];
+} __packed;
+
+#define NBL_EPRO_RSS_SK_ADDR (0xe74400)
+#define NBL_EPRO_RSS_SK_DEPTH (1)
+#define NBL_EPRO_RSS_SK_WIDTH (320)
+#define NBL_EPRO_RSS_SK_DWLEN (10)
+union epro_rss_sk_u {
+ struct epro_rss_sk {
+ u32 sk_arr[10]; /* [319:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_RSS_SK_DWLEN];
+} __packed;
+
+#define NBL_EPRO_VXLAN_SP_ADDR (0xe74500)
+#define NBL_EPRO_VXLAN_SP_DEPTH (1)
+#define NBL_EPRO_VXLAN_SP_WIDTH (32)
+#define NBL_EPRO_VXLAN_SP_DWLEN (1)
+union epro_vxlan_sp_u {
+ struct epro_vxlan_sp {
+ u32 vxlan_tnl_sp_min:16; /* [15:0] Default:0x8000 RW */
+ u32 vxlan_tnl_sp_max:16; /* [31:16] Default:0xee48 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_VXLAN_SP_DWLEN];
+} __packed;
+
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_ADDR (0xe74600)
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_DEPTH (1)
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_WIDTH (32)
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_DWLEN (1)
+union epro_loop_sch_cos_default_u {
+ struct epro_loop_sch_cos_default {
+ u32 sch_cos:3; /* [2:0] Default:0x0 RW */
+ u32 pfc_mode:1; /* [3] Default:0x0 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_LOOP_SCH_COS_DEFAULT_DWLEN];
+} __packed;
+
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_ADDR (0xe74604)
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_DEPTH (1)
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_WIDTH (32)
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_DWLEN (1)
+union epro_mirror_pkt_cos_default_u {
+ struct epro_mirror_pkt_cos_default {
+ u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+ u32 rsv:29; /* [31:3] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_MIRROR_PKT_COS_DEFAULT_DWLEN];
+} __packed;
+
+#define NBL_EPRO_NO_DPORT_REDIRECT_ADDR (0xe7463c)
+#define NBL_EPRO_NO_DPORT_REDIRECT_DEPTH (1)
+#define NBL_EPRO_NO_DPORT_REDIRECT_WIDTH (32)
+#define NBL_EPRO_NO_DPORT_REDIRECT_DWLEN (1)
+union epro_no_dport_redirect_u {
+ struct epro_no_dport_redirect {
+ u32 dport:16; /* [15:0] Default:0x0 RW */
+ u32 dqueue:11; /* [26:16] Default:0x0 RW */
+ u32 dqueue_en:1; /* [27] Default:0x0 RW */
+ u32 rsv:4; /* [31:28] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_NO_DPORT_REDIRECT_DWLEN];
+} __packed;
+
+#define NBL_EPRO_SCH_COS_MAP_ETH0_ADDR (0xe74640)
+#define NBL_EPRO_SCH_COS_MAP_ETH0_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH0_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH0_DWLEN (1)
+union epro_sch_cos_map_eth0_u {
+ struct epro_sch_cos_map_eth0 {
+ u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+ u32 dscp:6; /* [8:3] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_SCH_COS_MAP_ETH0_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH0_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH0_ADDR + \
+ (NBL_EPRO_SCH_COS_MAP_ETH0_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_ETH1_ADDR (0xe74660)
+#define NBL_EPRO_SCH_COS_MAP_ETH1_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH1_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH1_DWLEN (1)
+union epro_sch_cos_map_eth1_u {
+ struct epro_sch_cos_map_eth1 {
+ u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+ u32 dscp:6; /* [8:3] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_SCH_COS_MAP_ETH1_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH1_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH1_ADDR + \
+ (NBL_EPRO_SCH_COS_MAP_ETH1_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_ETH2_ADDR (0xe74680)
+#define NBL_EPRO_SCH_COS_MAP_ETH2_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH2_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH2_DWLEN (1)
+union epro_sch_cos_map_eth2_u {
+ struct epro_sch_cos_map_eth2 {
+ u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+ u32 dscp:6; /* [8:3] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_SCH_COS_MAP_ETH2_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH2_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH2_ADDR + \
+ (NBL_EPRO_SCH_COS_MAP_ETH2_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_ETH3_ADDR (0xe746a0)
+#define NBL_EPRO_SCH_COS_MAP_ETH3_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH3_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH3_DWLEN (1)
+union epro_sch_cos_map_eth3_u {
+ struct epro_sch_cos_map_eth3 {
+ u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+ u32 dscp:6; /* [8:3] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_SCH_COS_MAP_ETH3_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH3_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH3_ADDR + \
+ (NBL_EPRO_SCH_COS_MAP_ETH3_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_LOOP_ADDR (0xe746c0)
+#define NBL_EPRO_SCH_COS_MAP_LOOP_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_LOOP_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_LOOP_DWLEN (1)
+union epro_sch_cos_map_loop_u {
+ struct epro_sch_cos_map_loop {
+ u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+ u32 dscp:6; /* [8:3] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_SCH_COS_MAP_LOOP_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_LOOP_REG(r) (NBL_EPRO_SCH_COS_MAP_LOOP_ADDR + \
+ (NBL_EPRO_SCH_COS_MAP_LOOP_DWLEN * 4) * (r))
+
+#define NBL_EPRO_PORT_PRI_MDF_EN_ADDR (0xe746e0)
+#define NBL_EPRO_PORT_PRI_MDF_EN_DEPTH (1)
+#define NBL_EPRO_PORT_PRI_MDF_EN_WIDTH (32)
+#define NBL_EPRO_PORT_PRI_MDF_EN_DWLEN (1)
+union epro_port_pri_mdf_en_u {
+ struct epro_port_pri_mdf_en {
+ u32 eth0:1; /* [0] Default:0x0 RW */
+ u32 eth1:1; /* [1] Default:0x0 RW */
+ u32 eth2:1; /* [2] Default:0x0 RW */
+ u32 eth3:1; /* [3] Default:0x0 RW */
+ u32 loop:1; /* [4] Default:0x0 RW */
+ u32 rsv:27; /* [31:5] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_PORT_PRI_MDF_EN_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CFG_TEST_ADDR (0xe7480c)
+#define NBL_EPRO_CFG_TEST_DEPTH (1)
+#define NBL_EPRO_CFG_TEST_WIDTH (32)
+#define NBL_EPRO_CFG_TEST_DWLEN (1)
+union epro_cfg_test_u {
+ struct epro_cfg_test {
+ u32 test:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_CFG_TEST_DWLEN];
+} __packed;
+
+#define NBL_EPRO_BP_STATE_ADDR (0xe74b00)
+#define NBL_EPRO_BP_STATE_DEPTH (1)
+#define NBL_EPRO_BP_STATE_WIDTH (32)
+#define NBL_EPRO_BP_STATE_DWLEN (1)
+union epro_bp_state_u {
+ struct epro_bp_state {
+ u32 in_bp:1; /* [0] Default:0x0 RO */
+ u32 out_bp:1; /* [1] Default:0x0 RO */
+ u32 inter_bp:1; /* [2] Default:0x0 RO */
+ u32 rsv:29; /* [31:3] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_BP_STATE_DWLEN];
+} __packed;
+
+#define NBL_EPRO_BP_HISTORY_ADDR (0xe74b04)
+#define NBL_EPRO_BP_HISTORY_DEPTH (1)
+#define NBL_EPRO_BP_HISTORY_WIDTH (32)
+#define NBL_EPRO_BP_HISTORY_DWLEN (1)
+union epro_bp_history_u {
+ struct epro_bp_history {
+ u32 in_bp:1; /* [0] Default:0x0 RC */
+ u32 out_bp:1; /* [1] Default:0x0 RC */
+ u32 inter_bp:1; /* [2] Default:0x0 RC */
+ u32 rsv:29; /* [31:3] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_BP_HISTORY_DWLEN];
+} __packed;
+
+#define NBL_EPRO_MT_ADDR (0xe75400)
+#define NBL_EPRO_MT_DEPTH (16)
+#define NBL_EPRO_MT_WIDTH (64)
+#define NBL_EPRO_MT_DWLEN (2)
+#define NBL_EPRO_MT_MAX (8)
+union epro_mt_u {
+ struct epro_mt {
+ u32 dport:16; /* [15:0] Default:0x0 RW */
+ u32 dqueue:11; /* [26:16] Default:0x0 RW */
+ u32 car_en:1; /* [27] Default:0x0 RW */
+ u32 car_id:10; /* [37:28] Default:0x0 RW */
+ u32 vld:1; /* [38] Default:0x0 RW */
+ u32 rsv:25; /* [63:39] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_MT_DWLEN];
+} __packed;
+#define NBL_EPRO_MT_REG(r) (NBL_EPRO_MT_ADDR + \
+ (NBL_EPRO_MT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_KG_TCAM_ADDR (0xe75480)
+#define NBL_EPRO_KG_TCAM_DEPTH (16)
+#define NBL_EPRO_KG_TCAM_WIDTH (64)
+#define NBL_EPRO_KG_TCAM_DWLEN (2)
+union epro_kg_tcam_u {
+ struct epro_kg_tcam {
+ u32 mask:16; /* [15:0] Default:0x0 RW */
+ u32 data:16; /* [31:16] Default:0x0 RW */
+ u32 valid_bit:1; /* [32] Default:0x0 RW */
+ u32 rsv:31; /* [63:33] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_KG_TCAM_DWLEN];
+} __packed;
+#define NBL_EPRO_KG_TCAM_REG(r) (NBL_EPRO_KG_TCAM_ADDR + \
+ (NBL_EPRO_KG_TCAM_DWLEN * 4) * (r))
+
+#define NBL_EPRO_VPT_ADDR (0xe78000)
+#define NBL_EPRO_VPT_DEPTH (1024)
+#define NBL_EPRO_VPT_WIDTH (64)
+#define NBL_EPRO_VPT_DWLEN (2)
+union epro_vpt_u {
+ struct epro_vpt {
+ u32 cvlan:16; /* [15:0] Default:0x0 RW */
+ u32 svlan:16; /* [31:16] Default:0x0 RW */
+ u32 fwd:1; /* [32] Default:0x0 RW */
+ u32 mirror_en:1; /* [33] Default:0x0 RW */
+ u32 mirror_id:4; /* [37:34] Default:0x0 RW */
+ u32 car_en:1; /* [38] Default:0x0 RW */
+ u32 car_id:10; /* [48:39] Default:0x0 RW */
+ u32 pop_vlan:2; /* [50:49] Default:0x0 RW */
+ u32 push_vlan:2; /* [52:51] Default:0x0 RW */
+ u32 replace_vlan:2; /* [54:53] Default:0x0 RW */
+ u32 rss_alg_sel:1; /* [55] Default:0x0 RW */
+ u32 rss_key_type_btm:2; /* [57:56] Default:0x0 RW */
+ u32 vld:1; /* [58] Default:0x0 RW */
+ u32 rsv:5; /* [63:59] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_VPT_DWLEN];
+} __packed;
+#define NBL_EPRO_VPT_REG(r) (NBL_EPRO_VPT_ADDR + \
+ (NBL_EPRO_VPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_EPT_ADDR (0xe75800)
+#define NBL_EPRO_EPT_DEPTH (8)
+#define NBL_EPRO_EPT_WIDTH (64)
+#define NBL_EPRO_EPT_DWLEN (2)
+union epro_ept_u {
+ struct epro_ept {
+ u32 cvlan:16; /* [15:0] Default:0x0 RW */
+ u32 svlan:16; /* [31:16] Default:0x0 RW */
+ u32 fwd:1; /* [32] Default:0x0 RW */
+ u32 mirror_en:1; /* [33] Default:0x0 RW */
+ u32 mirror_id:4; /* [37:34] Default:0x0 RW */
+ u32 pop_vlan:2; /* [39:38] Default:0x0 RW */
+ u32 push_vlan:2; /* [41:40] Default:0x0 RW */
+ u32 replace_vlan:2; /* [43:42] Default:0x0 RW */
+ u32 lag_alg_sel:2; /* [45:44] Default:0x0 RW */
+ u32 lag_port_btm:4; /* [49:46] Default:0x0 RW */
+ u32 lag_l2_protect_en:1; /* [50] Default:0x0 RW */
+ u32 pfc_sch_cos_default:3; /* [53:51] Default:0x0 RW */
+ u32 pfc_mode:1; /* [54] Default:0x0 RW */
+ u32 vld:1; /* [55] Default:0x0 RW */
+ u32 rsv:8; /* [63:56] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_EPT_DWLEN];
+} __packed;
+#define NBL_EPRO_EPT_REG(r) (NBL_EPRO_EPT_ADDR + \
+ (NBL_EPRO_EPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_AFT_ADDR (0xe75900)
+#define NBL_EPRO_AFT_DEPTH (16)
+#define NBL_EPRO_AFT_WIDTH (64)
+#define NBL_EPRO_AFT_DWLEN (2)
+union epro_aft_u {
+ struct epro_aft {
+ u32 action_filter_btm_arr[2]; /* [63:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_AFT_DWLEN];
+} __packed;
+#define NBL_EPRO_AFT_REG(r) (NBL_EPRO_AFT_ADDR + \
+ (NBL_EPRO_AFT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_RSS_PT_ADDR (0xe76000)
+#define NBL_EPRO_RSS_PT_DEPTH (1024)
+#define NBL_EPRO_RSS_PT_WIDTH (64)
+#define NBL_EPRO_RSS_PT_DWLEN (2)
+union epro_rss_pt_u {
+ struct epro_rss_pt {
+ u32 entry_size:3; /* [2:0] Default:0x0 RW */
+ u32 offset1:14; /* [16:3] Default:0x0 RW */
+ u32 offset1_vld:1; /* [17] Default:0x0 RW */
+ u32 offset0:14; /* [31:18] Default:0x0 RW */
+ u32 offset0_vld:1; /* [32] Default:0x0 RW */
+ u32 vld:1; /* [33] Default:0x0 RW */
+ u32 rsv:30; /* [63:34] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_RSS_PT_DWLEN];
+} __packed;
+#define NBL_EPRO_RSS_PT_REG(r) (NBL_EPRO_RSS_PT_ADDR + \
+ (NBL_EPRO_RSS_PT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_ECPVPT_ADDR (0xe7a000)
+#define NBL_EPRO_ECPVPT_DEPTH (256)
+#define NBL_EPRO_ECPVPT_WIDTH (32)
+#define NBL_EPRO_ECPVPT_DWLEN (1)
+union epro_ecpvpt_u {
+ struct epro_ecpvpt {
+ u32 encap_cvlan_vld0:1; /* [0] Default:0x0 RW */
+ u32 encap_svlan_vld0:1; /* [1] Default:0x0 RW */
+ u32 encap_vlan_vld1_15:30; /* [31:2] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_ECPVPT_DWLEN];
+} __packed;
+#define NBL_EPRO_ECPVPT_REG(r) (NBL_EPRO_ECPVPT_ADDR + \
+ (NBL_EPRO_ECPVPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_ECPIPT_ADDR (0xe7b000)
+#define NBL_EPRO_ECPIPT_DEPTH (128)
+#define NBL_EPRO_ECPIPT_WIDTH (32)
+#define NBL_EPRO_ECPIPT_DWLEN (1)
+union epro_ecpipt_u {
+ struct epro_ecpipt {
+ u32 encap_ip_type0:1; /* [0] Default:0x0 RW */
+ u32 encap_ip_type1_31:31; /* [31:1] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_EPRO_ECPIPT_DWLEN];
+} __packed;
+#define NBL_EPRO_ECPIPT_REG(r) (NBL_EPRO_ECPIPT_ADDR + \
+ (NBL_EPRO_ECPIPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_RSS_RET_ADDR (0xe7c000)
+#define NBL_EPRO_RSS_RET_DEPTH (8192)
+#define NBL_EPRO_RSS_RET_WIDTH (32)
+#define NBL_EPRO_RSS_RET_DWLEN (1)
+union epro_rss_ret_u {
+ struct epro_rss_ret {
+ u32 dqueue0:11; /* [10:0] Default:0x0 RW */
+ u32 vld0:1; /* [11] Default:0x0 RW */
+ u32 rsv1:4; /* [15:12] Default:0x0 RO */
+ u32 dqueue1:11; /* [26:16] Default:0x0 RW */
+ u32 vld1:1; /* [27] Default:0x0 RW */
+ u32 rsv:4; /* [31:28] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_EPRO_RSS_RET_DWLEN];
+} __packed;
+#define NBL_EPRO_RSS_RET_REG(r) (NBL_EPRO_RSS_RET_ADDR + \
+ (NBL_EPRO_RSS_RET_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
new file mode 100644
index 000000000000..5f74a458a09a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
@@ -0,0 +1,1397 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_IPRO_H
+#define NBL_IPRO_H 1
+
+#include <linux/types.h>
+
+#define NBL_IPRO_BASE (0x00B04000)
+
+#define NBL_IPRO_INT_STATUS_ADDR (0xb04000)
+#define NBL_IPRO_INT_STATUS_DEPTH (1)
+#define NBL_IPRO_INT_STATUS_WIDTH (32)
+#define NBL_IPRO_INT_STATUS_DWLEN (1)
+union ipro_int_status_u {
+ struct ipro_int_status {
+ u32 fatal_err:1; /* [0] Default:0x0 RWC */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 RWC */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 RWC */
+ u32 cif_err:1; /* [3] Default:0x0 RWC */
+ u32 input_err:1; /* [4] Default:0x0 RWC */
+ u32 cfg_err:1; /* [5] Default:0x0 RWC */
+ u32 data_ucor_err:1; /* [6] Default:0x0 RWC */
+ u32 rsv:25; /* [31:7] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INT_MASK_ADDR (0xb04004)
+#define NBL_IPRO_INT_MASK_DEPTH (1)
+#define NBL_IPRO_INT_MASK_WIDTH (32)
+#define NBL_IPRO_INT_MASK_DWLEN (1)
+union ipro_int_mask_u {
+ struct ipro_int_mask {
+ u32 fatal_err:1; /* [0] Default:0x0 RW */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 RW */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 RW */
+ u32 cif_err:1; /* [3] Default:0x0 RW */
+ u32 input_err:1; /* [4] Default:0x0 RW */
+ u32 cfg_err:1; /* [5] Default:0x0 RW */
+ u32 data_ucor_err:1; /* [6] Default:0x0 RW */
+ u32 rsv:25; /* [31:7] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INT_SET_ADDR (0xb04008)
+#define NBL_IPRO_INT_SET_DEPTH (1)
+#define NBL_IPRO_INT_SET_WIDTH (32)
+#define NBL_IPRO_INT_SET_DWLEN (1)
+union ipro_int_set_u {
+ struct ipro_int_set {
+ u32 fatal_err:1; /* [0] Default:0x0 WO */
+ u32 fifo_uflw_err:1; /* [1] Default:0x0 WO */
+ u32 fifo_dflw_err:1; /* [2] Default:0x0 WO */
+ u32 cif_err:1; /* [3] Default:0x0 WO */
+ u32 input_err:1; /* [4] Default:0x0 WO */
+ u32 cfg_err:1; /* [5] Default:0x0 WO */
+ u32 data_ucor_err:1; /* [6] Default:0x0 WO */
+ u32 rsv:25; /* [31:7] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INIT_DONE_ADDR (0xb0400c)
+#define NBL_IPRO_INIT_DONE_DEPTH (1)
+#define NBL_IPRO_INIT_DONE_WIDTH (32)
+#define NBL_IPRO_INIT_DONE_DWLEN (1)
+union ipro_init_done_u {
+ struct ipro_init_done {
+ u32 done:1; /* [0] Default:0x0 RO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CIF_ERR_INFO_ADDR (0xb04040)
+#define NBL_IPRO_CIF_ERR_INFO_DEPTH (1)
+#define NBL_IPRO_CIF_ERR_INFO_WIDTH (32)
+#define NBL_IPRO_CIF_ERR_INFO_DWLEN (1)
+union ipro_cif_err_info_u {
+ struct ipro_cif_err_info {
+ u32 addr:30; /* [29:0] Default:0x0 RO */
+ u32 wr_err:1; /* [30] Default:0x0 RO */
+ u32 ucor_err:1; /* [31] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INPUT_ERR_INFO_ADDR (0xb04048)
+#define NBL_IPRO_INPUT_ERR_INFO_DEPTH (1)
+#define NBL_IPRO_INPUT_ERR_INFO_WIDTH (32)
+#define NBL_IPRO_INPUT_ERR_INFO_DWLEN (1)
+union ipro_input_err_info_u {
+ struct ipro_input_err_info {
+ u32 id:2; /* [1:0] Default:0x0 RO */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_INPUT_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CFG_ERR_INFO_ADDR (0xb04050)
+#define NBL_IPRO_CFG_ERR_INFO_DEPTH (1)
+#define NBL_IPRO_CFG_ERR_INFO_WIDTH (32)
+#define NBL_IPRO_CFG_ERR_INFO_DWLEN (1)
+union ipro_cfg_err_info_u {
+ struct ipro_cfg_err_info {
+ u32 id:2; /* [1:0] Default:0x0 RO */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_CFG_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CAR_CTRL_ADDR (0xb04100)
+#define NBL_IPRO_CAR_CTRL_DEPTH (1)
+#define NBL_IPRO_CAR_CTRL_WIDTH (32)
+#define NBL_IPRO_CAR_CTRL_DWLEN (1)
+union ipro_car_ctrl_u {
+ struct ipro_car_ctrl {
+ u32 sctr_car:1; /* [0] Default:0x1 RW */
+ u32 rctr_car:1; /* [1] Default:0x1 RW */
+ u32 rc_car:1; /* [2] Default:0x1 RW */
+ u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INIT_START_ADDR (0xb04180)
+#define NBL_IPRO_INIT_START_DEPTH (1)
+#define NBL_IPRO_INIT_START_WIDTH (32)
+#define NBL_IPRO_INIT_START_DWLEN (1)
+union ipro_init_start_u {
+ struct ipro_init_start {
+ u32 init_start:1; /* [0] Default:0x0 WO */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CREDIT_TOKEN_ADDR (0xb041c0)
+#define NBL_IPRO_CREDIT_TOKEN_DEPTH (1)
+#define NBL_IPRO_CREDIT_TOKEN_WIDTH (32)
+#define NBL_IPRO_CREDIT_TOKEN_DWLEN (1)
+union ipro_credit_token_u {
+ struct ipro_credit_token {
+ u32 up_token_num:8; /* [7:0] Default:0x80 RW */
+ u32 down_token_num:8; /* [15:8] Default:0x80 RW */
+ u32 up_init_vld:1; /* [16] Default:0x0 WO */
+ u32 down_init_vld:1; /* [17] Default:0x0 WO */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_CREDIT_TOKEN_DWLEN];
+} __packed;
+
+#define NBL_IPRO_AM_SET_FLAG_ADDR (0xb041e0)
+#define NBL_IPRO_AM_SET_FLAG_DEPTH (1)
+#define NBL_IPRO_AM_SET_FLAG_WIDTH (32)
+#define NBL_IPRO_AM_SET_FLAG_DWLEN (1)
+union ipro_am_set_flag_u {
+ struct ipro_am_set_flag {
+ u32 set_flag:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_AM_SET_FLAG_DWLEN];
+} __packed;
+
+#define NBL_IPRO_AM_CLEAR_FLAG_ADDR (0xb041e4)
+#define NBL_IPRO_AM_CLEAR_FLAG_DEPTH (1)
+#define NBL_IPRO_AM_CLEAR_FLAG_WIDTH (32)
+#define NBL_IPRO_AM_CLEAR_FLAG_DWLEN (1)
+union ipro_am_clear_flag_u {
+ struct ipro_am_clear_flag {
+ u32 clear_flag:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_AM_CLEAR_FLAG_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FLAG_OFFSET_0_ADDR (0xb04200)
+#define NBL_IPRO_FLAG_OFFSET_0_DEPTH (1)
+#define NBL_IPRO_FLAG_OFFSET_0_WIDTH (32)
+#define NBL_IPRO_FLAG_OFFSET_0_DWLEN (1)
+union ipro_flag_offset_0_u {
+ struct ipro_flag_offset_0 {
+ u32 dir_offset_en:1; /* [0] Default:0x1 RW */
+ u32 dir_offset:5; /* [5:1] Default:0x00 RW */
+ u32 rsv1:2; /* [7:6] Default:0x0 RO */
+ u32 hw_flow_offset_en:1; /* [8] Default:0x1 RW */
+ u32 hw_flow_offset:5; /* [13:9] Default:0xb RW */
+ u32 rsv:18; /* [31:14] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_FLAG_OFFSET_0_DWLEN];
+} __packed;
+
+#define NBL_IPRO_DROP_NXT_STAGE_ADDR (0xb04210)
+#define NBL_IPRO_DROP_NXT_STAGE_DEPTH (1)
+#define NBL_IPRO_DROP_NXT_STAGE_WIDTH (32)
+#define NBL_IPRO_DROP_NXT_STAGE_DWLEN (1)
+union ipro_drop_nxt_stage_u {
+ struct ipro_drop_nxt_stage {
+ u32 stage:4; /* [3:0] Default:0xf RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_DROP_NXT_STAGE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FWD_ACTION_PRI_ADDR (0xb04220)
+#define NBL_IPRO_FWD_ACTION_PRI_DEPTH (1)
+#define NBL_IPRO_FWD_ACTION_PRI_WIDTH (32)
+#define NBL_IPRO_FWD_ACTION_PRI_DWLEN (1)
+union ipro_fwd_action_pri_u {
+ struct ipro_fwd_action_pri {
+ u32 dqueue:2; /* [1:0] Default:0x0 RW */
+ u32 set_dport:2; /* [3:2] Default:0x0 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_FWD_ACTION_PRI_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MTU_CHECK_CTRL_ADDR (0xb0427c)
+#define NBL_IPRO_MTU_CHECK_CTRL_DEPTH (1)
+#define NBL_IPRO_MTU_CHECK_CTRL_WIDTH (32)
+#define NBL_IPRO_MTU_CHECK_CTRL_DWLEN (1)
+union ipro_mtu_check_ctrl_u {
+ struct ipro_mtu_check_ctrl {
+ u32 set_dport:16; /* [15:0] Default:0xFFFF RW */
+ u32 set_dport_pri:2; /* [17:16] Default:0x3 RW */
+ u32 proc_done:1; /* [18] Default:0x1 RW */
+ u32 rsv:13; /* [31:19] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MTU_CHECK_CTRL_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MTU_SEL_ADDR (0xb04280)
+#define NBL_IPRO_MTU_SEL_DEPTH (8)
+#define NBL_IPRO_MTU_SEL_WIDTH (32)
+#define NBL_IPRO_MTU_SEL_DWLEN (1)
+union ipro_mtu_sel_u {
+ struct ipro_mtu_sel {
+ u32 mtu_1:16; /* [15:0] Default:0x0 RW */
+ u32 mtu_0:16; /* [31:16] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MTU_SEL_DWLEN];
+} __packed;
+#define NBL_IPRO_MTU_SEL_REG(r) (NBL_IPRO_MTU_SEL_ADDR + \
+ (NBL_IPRO_MTU_SEL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_ADDR (0xb04300)
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_DEPTH (16)
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_WIDTH (64)
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_DWLEN (2)
+union ipro_udl_pkt_flt_dmac_u {
+ struct ipro_udl_pkt_flt_dmac {
+ u32 dmac_l:32; /* [31:0] Default:0x0 RW */
+ u32 dmac_h:16; /* [47:32] Default:0x0 RW */
+ u32 rsv:16; /* [63:48] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_UDL_PKT_FLT_DMAC_DWLEN];
+} __packed;
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_REG(r) (NBL_IPRO_UDL_PKT_FLT_DMAC_ADDR + \
+ (NBL_IPRO_UDL_PKT_FLT_DMAC_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_ADDR (0xb04380)
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_DEPTH (16)
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_WIDTH (32)
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_DWLEN (1)
+union ipro_udl_pkt_flt_vlan_u {
+ struct ipro_udl_pkt_flt_vlan {
+ u32 vlan_0:12; /* [11:0] Default:0x0 RW */
+ u32 vlan_1:12; /* [23:12] Default:0x0 RW */
+ u32 vlan_layer:2; /* [25:24] Default:0x0 RW */
+ u32 rsv:6; /* [31:26] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_UDL_PKT_FLT_VLAN_DWLEN];
+} __packed;
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_REG(r) (NBL_IPRO_UDL_PKT_FLT_VLAN_ADDR + \
+ (NBL_IPRO_UDL_PKT_FLT_VLAN_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_ADDR (0xb043c0)
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_DEPTH (1)
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_WIDTH (32)
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_DWLEN (1)
+union ipro_udl_pkt_flt_ctrl_u {
+ struct ipro_udl_pkt_flt_ctrl {
+ u32 vld:16; /* [15:0] Default:0x0 RW */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_UDL_PKT_FLT_CTRL_DWLEN];
+} __packed;
+
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_ADDR (0xb043c4)
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_DEPTH (1)
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_WIDTH (32)
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_DWLEN (1)
+union ipro_udl_pkt_flt_action_u {
+ struct ipro_udl_pkt_flt_action {
+ u32 dqueue:11; /* [10:0] Default:0x0 RW */
+ u32 dqueue_en:1; /* [11] Default:0x0 RW */
+ u32 rsv:2; /* [13:12] Default:0x0 RO */
+ u32 proc_done:1; /* [14] Default:0x0 RW */
+ u32 set_dport_en:1; /* [15] Default:0x0 RW */
+ u32 set_dport:16; /* [31:16] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_UDL_PKT_FLT_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_ADDR (0xb043e0)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_DEPTH (1)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_WIDTH (32)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_DWLEN (1)
+union ipro_anti_fake_addr_errcode_u {
+ struct ipro_anti_fake_addr_errcode {
+ u32 num:4; /* [3:0] Default:0xA RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_ADDR (0xb043e4)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_DEPTH (1)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_WIDTH (32)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_DWLEN (1)
+union ipro_anti_fake_addr_action_u {
+ struct ipro_anti_fake_addr_action {
+ u32 dqueue:11; /* [10:0] Default:0x0 RW */
+ u32 dqueue_en:1; /* [11] Default:0x0 RW */
+ u32 rsv:2; /* [13:12] Default:0x0 RO */
+ u32 proc_done:1; /* [14] Default:0x1 RW */
+ u32 set_dport_en:1; /* [15] Default:0x1 RW */
+ u32 set_dport:16; /* [31:16] Default:0xFFFF RW */
+ } __packed info;
+ u32 data[NBL_IPRO_ANTI_FAKE_ADDR_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_ADDR (0xb043f0)
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_DEPTH (1)
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_WIDTH (32)
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_DWLEN (1)
+union ipro_vlan_num_chk_errcode_u {
+ struct ipro_vlan_num_chk_errcode {
+ u32 num:4; /* [3:0] Default:0x1 RW */
+ u32 rsv:28; /* [31:4] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_VLAN_NUM_CHK_ERRCODE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_ADDR (0xb043f4)
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_DEPTH (1)
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_WIDTH (32)
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_DWLEN (1)
+union ipro_vlan_num_chk_action_u {
+ struct ipro_vlan_num_chk_action {
+ u32 dqueue:11; /* [10:0] Default:0x0 RW */
+ u32 dqueue_en:1; /* [11] Default:0x0 RW */
+ u32 rsv:2; /* [13:12] Default:0x0 RO */
+ u32 proc_done:1; /* [14] Default:0x1 RW */
+ u32 set_dport_en:1; /* [15] Default:0x1 RW */
+ u32 set_dport:16; /* [31:16] Default:0xFFFF RW */
+ } __packed info;
+ u32 data[NBL_IPRO_VLAN_NUM_CHK_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_PROBE_ADDR (0xb04400)
+#define NBL_IPRO_TCP_STATE_PROBE_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_PROBE_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_PROBE_DWLEN (1)
+union ipro_tcp_state_probe_u {
+ struct ipro_tcp_state_probe {
+ u32 up_chk_en:1; /* [0] Default:0x0 RW */
+ u32 dn_chk_en:1; /* [1] Default:0x0 RW */
+ u32 rsv:14; /* [15:2] Default:0x0 RO */
+ u32 up_bitmap:8; /* [23:16] Default:0x0 RW */
+ u32 dn_bitmap:8; /* [31:24] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_TCP_STATE_PROBE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_UP_ACTION_ADDR (0xb04404)
+#define NBL_IPRO_TCP_STATE_UP_ACTION_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_UP_ACTION_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_UP_ACTION_DWLEN (1)
+union ipro_tcp_state_up_action_u {
+ struct ipro_tcp_state_up_action {
+ u32 dqueue:11; /* [10:0] Default:0x0 RW */
+ u32 dqueue_en:1; /* [11] Default:0x0 RW */
+ u32 rsv:2; /* [13:12] Default:0x0 RO */
+ u32 proc_done:1; /* [14] Default:0x0 RW */
+ u32 set_dport_en:1; /* [15] Default:0x0 RW */
+ u32 set_dport:16; /* [31:16] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_TCP_STATE_UP_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_DN_ACTION_ADDR (0xb04408)
+#define NBL_IPRO_TCP_STATE_DN_ACTION_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_DN_ACTION_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_DN_ACTION_DWLEN (1)
+union ipro_tcp_state_dn_action_u {
+ struct ipro_tcp_state_dn_action {
+ u32 dqueue:11; /* [10:0] Default:0x0 RW */
+ u32 dqueue_en:1; /* [11] Default:0x0 RW */
+ u32 rsv:2; /* [13:12] Default:0x0 RO */
+ u32 proc_done:1; /* [14] Default:0x0 RW */
+ u32 set_dport_en:1; /* [15] Default:0x0 RW */
+ u32 set_dport:16; /* [31:16] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_TCP_STATE_DN_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FWD_ACTION_ID_ADDR (0xb04440)
+#define NBL_IPRO_FWD_ACTION_ID_DEPTH (1)
+#define NBL_IPRO_FWD_ACTION_ID_WIDTH (32)
+#define NBL_IPRO_FWD_ACTION_ID_DWLEN (1)
+union ipro_fwd_action_id_u {
+ struct ipro_fwd_action_id {
+ u32 mirror_index:6; /* [5:0] Default:0x8 RW */
+ u32 dport:6; /* [11:6] Default:0x9 RW */
+ u32 dqueue:6; /* [17:12] Default:0xA RW */
+ u32 car:6; /* [23:18] Default:0x5 RW */
+ u32 rsv:8; /* [31:24] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_FWD_ACTION_ID_DWLEN];
+} __packed;
+
+#define NBL_IPRO_PED_ACTION_ID_ADDR (0xb04448)
+#define NBL_IPRO_PED_ACTION_ID_DEPTH (1)
+#define NBL_IPRO_PED_ACTION_ID_WIDTH (32)
+#define NBL_IPRO_PED_ACTION_ID_DWLEN (1)
+union ipro_ped_action_id_u {
+ struct ipro_ped_action_id {
+ u32 encap:6; /* [5:0] Default:0x2E RW */
+ u32 decap:6; /* [11:6] Default:0x2F RW */
+ u32 rsv:20; /* [31:12] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_PED_ACTION_ID_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_HIT_ACTION_ADDR (0xb04510)
+#define NBL_IPRO_MNG_HIT_ACTION_DEPTH (8)
+#define NBL_IPRO_MNG_HIT_ACTION_WIDTH (32)
+#define NBL_IPRO_MNG_HIT_ACTION_DWLEN (1)
+union ipro_mng_hit_action_u {
+ struct ipro_mng_hit_action {
+ u32 data:24; /* [23:0] Default:0x0 RW */
+ u32 rsv:8; /* [31:24] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_HIT_ACTION_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_HIT_ACTION_REG(r) (NBL_IPRO_MNG_HIT_ACTION_ADDR + \
+ (NBL_IPRO_MNG_HIT_ACTION_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DECISION_FLT_0_ADDR (0xb04530)
+#define NBL_IPRO_MNG_DECISION_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_DECISION_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_DECISION_FLT_0_DWLEN (1)
+union ipro_mng_decision_flt_0_u {
+ struct ipro_mng_decision_flt_0 {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 pkt_len_and:1; /* [1] Default:0x0 RW */
+ u32 flow_ctrl_and:1; /* [2] Default:0x0 RW */
+ u32 ncsi_and:1; /* [3] Default:0x0 RW */
+ u32 eth_id:2; /* [5:4] Default:0x0 RW */
+ u32 rsv:26; /* [31:6] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_DECISION_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DECISION_FLT_0_REG(r) (NBL_IPRO_MNG_DECISION_FLT_0_ADDR + \
+ (NBL_IPRO_MNG_DECISION_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DECISION_FLT_1_ADDR (0xb04540)
+#define NBL_IPRO_MNG_DECISION_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_DECISION_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_DECISION_FLT_1_DWLEN (1)
+union ipro_mng_decision_flt_1_u {
+ struct ipro_mng_decision_flt_1 {
+ u32 dmac_and:4; /* [3:0] Default:0x0 RW */
+ u32 brcast_and:1; /* [4] Default:0x0 RW */
+ u32 mulcast_and:1; /* [5] Default:0x0 RW */
+ u32 vlan_and:8; /* [13:6] Default:0x0 RW */
+ u32 ipv4_dip_and:4; /* [17:14] Default:0x0 RW */
+ u32 ipv6_dip_and:4; /* [21:18] Default:0x0 RW */
+ u32 ethertype_and:4; /* [25:22] Default:0x0 RW */
+ u32 brcast_or:1; /* [26] Default:0x0 RW */
+ u32 icmpv4_or:1; /* [27] Default:0x0 RW */
+ u32 mld_or:4; /* [31:28] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_DECISION_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DECISION_FLT_1_REG(r) (NBL_IPRO_MNG_DECISION_FLT_1_ADDR + \
+ (NBL_IPRO_MNG_DECISION_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DECISION_FLT_2_ADDR (0xb04550)
+#define NBL_IPRO_MNG_DECISION_FLT_2_DEPTH (4)
+#define NBL_IPRO_MNG_DECISION_FLT_2_WIDTH (32)
+#define NBL_IPRO_MNG_DECISION_FLT_2_DWLEN (1)
+union ipro_mng_decision_flt_2_u {
+ struct ipro_mng_decision_flt_2 {
+ u32 neighbor_or:4; /* [3:0] Default:0x0 RW */
+ u32 port_or:16; /* [19:4] Default:0x0 RW */
+ u32 ethertype_or:4; /* [23:20] Default:0x0 RW */
+ u32 arp_rsp_or:2; /* [25:24] Default:0x0 RW */
+ u32 arp_req_or:2; /* [27:26] Default:0x0 RW */
+ u32 dmac_or:4; /* [31:28] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_DECISION_FLT_2_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DECISION_FLT_2_REG(r) (NBL_IPRO_MNG_DECISION_FLT_2_ADDR + \
+ (NBL_IPRO_MNG_DECISION_FLT_2_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DMAC_FLT_0_ADDR (0xb04560)
+#define NBL_IPRO_MNG_DMAC_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_DMAC_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_DMAC_FLT_0_DWLEN (1)
+union ipro_mng_dmac_flt_0_u {
+ struct ipro_mng_dmac_flt_0 {
+ u32 data:16; /* [15:0] Default:0x0 RW */
+ u32 en:1; /* [16] Default:0x0 RW */
+ u32 rsv:15; /* [31:17] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_DMAC_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DMAC_FLT_0_REG(r) (NBL_IPRO_MNG_DMAC_FLT_0_ADDR + \
+ (NBL_IPRO_MNG_DMAC_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DMAC_FLT_1_ADDR (0xb04570)
+#define NBL_IPRO_MNG_DMAC_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_DMAC_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_DMAC_FLT_1_DWLEN (1)
+union ipro_mng_dmac_flt_1_u {
+ struct ipro_mng_dmac_flt_1 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_DMAC_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DMAC_FLT_1_REG(r) (NBL_IPRO_MNG_DMAC_FLT_1_ADDR + \
+ (NBL_IPRO_MNG_DMAC_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_VLAN_FLT_ADDR (0xb04580)
+#define NBL_IPRO_MNG_VLAN_FLT_DEPTH (8)
+#define NBL_IPRO_MNG_VLAN_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_VLAN_FLT_DWLEN (1)
+union ipro_mng_vlan_flt_u {
+ struct ipro_mng_vlan_flt {
+ u32 data:12; /* [11:0] Default:0x0 RW */
+ u32 sel:1; /* [12] Default:0x0 RW */
+ u32 nontag:1; /* [13] Default:0x0 RW */
+ u32 en:1; /* [14] Default:0x0 RW */
+ u32 rsv:17; /* [31:15] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_VLAN_FLT_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_VLAN_FLT_REG(r) (NBL_IPRO_MNG_VLAN_FLT_ADDR + \
+ (NBL_IPRO_MNG_VLAN_FLT_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_ADDR (0xb045a0)
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_DEPTH (4)
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_DWLEN (1)
+union ipro_mng_ethertype_flt_u {
+ struct ipro_mng_ethertype_flt {
+ u32 data:16; /* [15:0] Default:0x0 RW */
+ u32 en:1; /* [16] Default:0x0 RW */
+ u32 rsv:15; /* [31:17] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_ETHERTYPE_FLT_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_REG(r) (NBL_IPRO_MNG_ETHERTYPE_FLT_ADDR + \
+ (NBL_IPRO_MNG_ETHERTYPE_FLT_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV4_FLT_0_ADDR (0xb045b0)
+#define NBL_IPRO_MNG_IPV4_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_IPV4_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_IPV4_FLT_0_DWLEN (1)
+union ipro_mng_ipv4_flt_0_u {
+ struct ipro_mng_ipv4_flt_0 {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV4_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV4_FLT_0_REG(r) (NBL_IPRO_MNG_IPV4_FLT_0_ADDR + \
+ (NBL_IPRO_MNG_IPV4_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV4_FLT_1_ADDR (0xb045c0)
+#define NBL_IPRO_MNG_IPV4_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_IPV4_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_IPV4_FLT_1_DWLEN (1)
+union ipro_mng_ipv4_flt_1_u {
+ struct ipro_mng_ipv4_flt_1 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV4_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV4_FLT_1_REG(r) (NBL_IPRO_MNG_IPV4_FLT_1_ADDR + \
+ (NBL_IPRO_MNG_IPV4_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_0_ADDR (0xb04600)
+#define NBL_IPRO_MNG_IPV6_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_0_DWLEN (1)
+union ipro_mng_ipv6_flt_0_u {
+ struct ipro_mng_ipv6_flt_0 {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:15; /* [15:1] Default:0x0 RO */
+ u32 mask:16; /* [31:16] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV6_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_0_REG(r) (NBL_IPRO_MNG_IPV6_FLT_0_ADDR + \
+ (NBL_IPRO_MNG_IPV6_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_1_ADDR (0xb04610)
+#define NBL_IPRO_MNG_IPV6_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_1_DWLEN (1)
+union ipro_mng_ipv6_flt_1_u {
+ struct ipro_mng_ipv6_flt_1 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV6_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_1_REG(r) (NBL_IPRO_MNG_IPV6_FLT_1_ADDR + \
+ (NBL_IPRO_MNG_IPV6_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_2_ADDR (0xb04620)
+#define NBL_IPRO_MNG_IPV6_FLT_2_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_2_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_2_DWLEN (1)
+union ipro_mng_ipv6_flt_2_u {
+ struct ipro_mng_ipv6_flt_2 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV6_FLT_2_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_2_REG(r) (NBL_IPRO_MNG_IPV6_FLT_2_ADDR + \
+ (NBL_IPRO_MNG_IPV6_FLT_2_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_3_ADDR (0xb04630)
+#define NBL_IPRO_MNG_IPV6_FLT_3_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_3_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_3_DWLEN (1)
+union ipro_mng_ipv6_flt_3_u {
+ struct ipro_mng_ipv6_flt_3 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV6_FLT_3_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_3_REG(r) (NBL_IPRO_MNG_IPV6_FLT_3_ADDR + \
+ (NBL_IPRO_MNG_IPV6_FLT_3_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_4_ADDR (0xb04640)
+#define NBL_IPRO_MNG_IPV6_FLT_4_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_4_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_4_DWLEN (1)
+union ipro_mng_ipv6_flt_4_u {
+ struct ipro_mng_ipv6_flt_4 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_IPV6_FLT_4_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_4_REG(r) (NBL_IPRO_MNG_IPV6_FLT_4_ADDR + \
+ (NBL_IPRO_MNG_IPV6_FLT_4_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_PORT_FLT_ADDR (0xb04650)
+#define NBL_IPRO_MNG_PORT_FLT_DEPTH (16)
+#define NBL_IPRO_MNG_PORT_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_PORT_FLT_DWLEN (1)
+union ipro_mng_port_flt_u {
+ struct ipro_mng_port_flt {
+ u32 data:16; /* [15:0] Default:0x0 RW */
+ u32 en:1; /* [16] Default:0x0 RW */
+ u32 mode:1; /* [17] Default:0x0 RW */
+ u32 tcp:1; /* [18] Default:0x0 RW */
+ u32 udp:1; /* [19] Default:0x0 RW */
+ u32 rsv:12; /* [31:20] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_PORT_FLT_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_PORT_FLT_REG(r) (NBL_IPRO_MNG_PORT_FLT_ADDR + \
+ (NBL_IPRO_MNG_PORT_FLT_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_ARP_REQ_FLT_0_ADDR (0xb04690)
+#define NBL_IPRO_MNG_ARP_REQ_FLT_0_DEPTH (2)
+#define NBL_IPRO_MNG_ARP_REQ_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_ARP_REQ_FLT_0_DWLEN (1)
+union ipro_mng_arp_req_flt_0_u {
+ struct ipro_mng_arp_req_flt_0 {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:15; /* [15:1] Default:0x0 RO */
+ u32 op:16; /* [31:16] Default:0x1 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_ARP_REQ_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_ARP_REQ_FLT_0_REG(r) (NBL_IPRO_MNG_ARP_REQ_FLT_0_ADDR + \
+ (NBL_IPRO_MNG_ARP_REQ_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_ARP_REQ_FLT_1_ADDR (0xb046a0)
+#define NBL_IPRO_MNG_ARP_REQ_FLT_1_DEPTH (2)
+#define NBL_IPRO_MNG_ARP_REQ_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_ARP_REQ_FLT_1_DWLEN (1)
+union ipro_mng_arp_req_flt_1_u {
+ struct ipro_mng_arp_req_flt_1 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_ARP_REQ_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_ARP_REQ_FLT_1_REG(r) (NBL_IPRO_MNG_ARP_REQ_FLT_1_ADDR + \
+ (NBL_IPRO_MNG_ARP_REQ_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_ARP_RSP_FLT_0_ADDR (0xb046b0)
+#define NBL_IPRO_MNG_ARP_RSP_FLT_0_DEPTH (2)
+#define NBL_IPRO_MNG_ARP_RSP_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_ARP_RSP_FLT_0_DWLEN (1)
+union ipro_mng_arp_rsp_flt_0_u {
+ struct ipro_mng_arp_rsp_flt_0 {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:15; /* [15:1] Default:0x0 RO */
+ u32 op:16; /* [31:16] Default:0x2 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_ARP_RSP_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_ARP_RSP_FLT_0_REG(r) (NBL_IPRO_MNG_ARP_RSP_FLT_0_ADDR + \
+ (NBL_IPRO_MNG_ARP_RSP_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_ARP_RSP_FLT_1_ADDR (0xb046c0)
+#define NBL_IPRO_MNG_ARP_RSP_FLT_1_DEPTH (2)
+#define NBL_IPRO_MNG_ARP_RSP_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_ARP_RSP_FLT_1_DWLEN (1)
+union ipro_mng_arp_rsp_flt_1_u {
+ struct ipro_mng_arp_rsp_flt_1 {
+ u32 data:32; /* [31:0] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_ARP_RSP_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_ARP_RSP_FLT_1_REG(r) (NBL_IPRO_MNG_ARP_RSP_FLT_1_ADDR + \
+ (NBL_IPRO_MNG_ARP_RSP_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_ADDR (0xb046d0)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_DEPTH (1)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_WIDTH (32)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_DWLEN (1)
+union ipro_mng_neighbor_flt_86_u {
+ struct ipro_mng_neighbor_flt_86 {
+ u32 data:8; /* [7:0] Default:0x86 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_86_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_ADDR (0xb046d4)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_DEPTH (1)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_WIDTH (32)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_DWLEN (1)
+union ipro_mng_neighbor_flt_87_u {
+ struct ipro_mng_neighbor_flt_87 {
+ u32 data:8; /* [7:0] Default:0x87 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_87_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_ADDR (0xb046d8)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_DEPTH (1)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_WIDTH (32)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_DWLEN (1)
+union ipro_mng_neighbor_flt_88_u {
+ struct ipro_mng_neighbor_flt_88 {
+ u32 data:8; /* [7:0] Default:0x88 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_88_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_ADDR (0xb046dc)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_DEPTH (1)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_WIDTH (32)
+#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_DWLEN (1)
+union ipro_mng_neighbor_flt_89_u {
+ struct ipro_mng_neighbor_flt_89 {
+ u32 data:8; /* [7:0] Default:0x89 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_89_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_MLD_FLT_82_ADDR (0xb046e0)
+#define NBL_IPRO_MNG_MLD_FLT_82_DEPTH (1)
+#define NBL_IPRO_MNG_MLD_FLT_82_WIDTH (32)
+#define NBL_IPRO_MNG_MLD_FLT_82_DWLEN (1)
+union ipro_mng_mld_flt_82_u {
+ struct ipro_mng_mld_flt_82 {
+ u32 data:8; /* [7:0] Default:0x82 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_MLD_FLT_82_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_MLD_FLT_83_ADDR (0xb046e4)
+#define NBL_IPRO_MNG_MLD_FLT_83_DEPTH (1)
+#define NBL_IPRO_MNG_MLD_FLT_83_WIDTH (32)
+#define NBL_IPRO_MNG_MLD_FLT_83_DWLEN (1)
+union ipro_mng_mld_flt_83_u {
+ struct ipro_mng_mld_flt_83 {
+ u32 data:8; /* [7:0] Default:0x83 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_MLD_FLT_83_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_MLD_FLT_84_ADDR (0xb046e8)
+#define NBL_IPRO_MNG_MLD_FLT_84_DEPTH (1)
+#define NBL_IPRO_MNG_MLD_FLT_84_WIDTH (32)
+#define NBL_IPRO_MNG_MLD_FLT_84_DWLEN (1)
+union ipro_mng_mld_flt_84_u {
+ struct ipro_mng_mld_flt_84 {
+ u32 data:8; /* [7:0] Default:0x84 RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_MLD_FLT_84_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_MLD_FLT_8F_ADDR (0xb046ec)
+#define NBL_IPRO_MNG_MLD_FLT_8F_DEPTH (1)
+#define NBL_IPRO_MNG_MLD_FLT_8F_WIDTH (32)
+#define NBL_IPRO_MNG_MLD_FLT_8F_DWLEN (1)
+union ipro_mng_mld_flt_8f_u {
+ struct ipro_mng_mld_flt_8f {
+ u32 data:8; /* [7:0] Default:0x8f RW */
+ u32 en:1; /* [8] Default:0x0 RW */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_MLD_FLT_8F_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_ICMPV4_FLT_ADDR (0xb046f0)
+#define NBL_IPRO_MNG_ICMPV4_FLT_DEPTH (1)
+#define NBL_IPRO_MNG_ICMPV4_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_ICMPV4_FLT_DWLEN (1)
+union ipro_mng_icmpv4_flt_u {
+ struct ipro_mng_icmpv4_flt {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_ICMPV4_FLT_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_BRCAST_FLT_ADDR (0xb04700)
+#define NBL_IPRO_MNG_BRCAST_FLT_DEPTH (1)
+#define NBL_IPRO_MNG_BRCAST_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_BRCAST_FLT_DWLEN (1)
+union ipro_mng_brcast_flt_u {
+ struct ipro_mng_brcast_flt {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_BRCAST_FLT_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_MULCAST_FLT_ADDR (0xb04704)
+#define NBL_IPRO_MNG_MULCAST_FLT_DEPTH (1)
+#define NBL_IPRO_MNG_MULCAST_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_MULCAST_FLT_DWLEN (1)
+union ipro_mng_mulcast_flt_u {
+ struct ipro_mng_mulcast_flt {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_MULCAST_FLT_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_FLOW_CTRL_FLT_ADDR (0xb04710)
+#define NBL_IPRO_MNG_FLOW_CTRL_FLT_DEPTH (1)
+#define NBL_IPRO_MNG_FLOW_CTRL_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_FLOW_CTRL_FLT_DWLEN (1)
+union ipro_mng_flow_ctrl_flt_u {
+ struct ipro_mng_flow_ctrl_flt {
+ u32 data:16; /* [15:0] Default:0x8808 RW */
+ u32 en:1; /* [16] Default:0x0 RW */
+ u32 bow:1; /* [17] Default:0x0 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_FLOW_CTRL_FLT_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_NCSI_FLT_ADDR (0xb04714)
+#define NBL_IPRO_MNG_NCSI_FLT_DEPTH (1)
+#define NBL_IPRO_MNG_NCSI_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_NCSI_FLT_DWLEN (1)
+union ipro_mng_ncsi_flt_u {
+ struct ipro_mng_ncsi_flt {
+ u32 data:16; /* [15:0] Default:0x88F8 RW */
+ u32 en:1; /* [16] Default:0x0 RW */
+ u32 bow:1; /* [17] Default:0x1 RW */
+ u32 rsv:14; /* [31:18] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_NCSI_FLT_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_PKT_LEN_FLT_ADDR (0xb04720)
+#define NBL_IPRO_MNG_PKT_LEN_FLT_DEPTH (1)
+#define NBL_IPRO_MNG_PKT_LEN_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_PKT_LEN_FLT_DWLEN (1)
+union ipro_mng_pkt_len_flt_u {
+ struct ipro_mng_pkt_len_flt {
+ u32 max:16; /* [15:0] Default:0x800 RW */
+ u32 en:1; /* [16] Default:0x0 RW */
+ u32 rsv:15; /* [31:17] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_PKT_LEN_FLT_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FLOW_STOP_ADDR (0xb04810)
+#define NBL_IPRO_FLOW_STOP_DEPTH (1)
+#define NBL_IPRO_FLOW_STOP_WIDTH (32)
+#define NBL_IPRO_FLOW_STOP_DWLEN (1)
+union ipro_flow_stop_u {
+ struct ipro_flow_stop {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_FLOW_STOP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TOKEN_NUM_ADDR (0xb04814)
+#define NBL_IPRO_TOKEN_NUM_DEPTH (1)
+#define NBL_IPRO_TOKEN_NUM_WIDTH (32)
+#define NBL_IPRO_TOKEN_NUM_DWLEN (1)
+union ipro_token_num_u {
+ struct ipro_token_num {
+ u32 dn_cnt:8; /* [7:0] Default:0x80 RO */
+ u32 up_cnt:8; /* [15:8] Default:0x80 RO */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_TOKEN_NUM_DWLEN];
+} __packed;
+
+#define NBL_IPRO_BYPASS_ADDR (0xb04818)
+#define NBL_IPRO_BYPASS_DEPTH (1)
+#define NBL_IPRO_BYPASS_WIDTH (32)
+#define NBL_IPRO_BYPASS_DWLEN (1)
+union ipro_bypass_u {
+ struct ipro_bypass {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_BYPASS_DWLEN];
+} __packed;
+
+#define NBL_IPRO_RR_REQ_MASK_ADDR (0xb0481c)
+#define NBL_IPRO_RR_REQ_MASK_DEPTH (1)
+#define NBL_IPRO_RR_REQ_MASK_WIDTH (32)
+#define NBL_IPRO_RR_REQ_MASK_DWLEN (1)
+union ipro_rr_req_mask_u {
+ struct ipro_rr_req_mask {
+ u32 dn:1; /* [0] Default:0x0 RW */
+ u32 up:1; /* [1] Default:0x0 RW */
+ u32 rsv:30; /* [31:2] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_RR_REQ_MASK_DWLEN];
+} __packed;
+
+#define NBL_IPRO_BP_STATE_ADDR (0xb04828)
+#define NBL_IPRO_BP_STATE_DEPTH (1)
+#define NBL_IPRO_BP_STATE_WIDTH (32)
+#define NBL_IPRO_BP_STATE_DWLEN (1)
+union ipro_bp_state_u {
+ struct ipro_bp_state {
+ u32 pp_up_link_fc:1; /* [0] Default:0x0 RO */
+ u32 pp_dn_link_fc:1; /* [1] Default:0x0 RO */
+ u32 pp_up_creadit:1; /* [2] Default:0x0 RO */
+ u32 pp_dn_creadit:1; /* [3] Default:0x0 RO */
+ u32 mcc_up_creadit:1; /* [4] Default:0x0 RO */
+ u32 mcc_dn_creadit:1; /* [5] Default:0x0 RO */
+ u32 pp_rdy:1; /* [6] Default:0x1 RO */
+ u32 dn_rdy:1; /* [7] Default:0x1 RO */
+ u32 up_rdy:1; /* [8] Default:0x1 RO */
+ u32 rsv:23; /* [31:9] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_BP_STATE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_BP_HISTORY_ADDR (0xb0482c)
+#define NBL_IPRO_BP_HISTORY_DEPTH (1)
+#define NBL_IPRO_BP_HISTORY_WIDTH (32)
+#define NBL_IPRO_BP_HISTORY_DWLEN (1)
+union ipro_bp_history_u {
+ struct ipro_bp_history {
+ u32 pp_rdy:1; /* [0] Default:0x0 RC */
+ u32 dn_rdy:1; /* [1] Default:0x0 RC */
+ u32 up_rdy:1; /* [2] Default:0x0 RC */
+ u32 rsv:29; /* [31:3] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_BP_HISTORY_DWLEN];
+} __packed;
+
+#define NBL_IPRO_ERRCODE_TBL_DROP_ADDR (0xb0486c)
+#define NBL_IPRO_ERRCODE_TBL_DROP_DEPTH (1)
+#define NBL_IPRO_ERRCODE_TBL_DROP_WIDTH (32)
+#define NBL_IPRO_ERRCODE_TBL_DROP_DWLEN (1)
+union ipro_errcode_tbl_drop_u {
+ struct ipro_errcode_tbl_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_ERRCODE_TBL_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_SPORT_TBL_DROP_ADDR (0xb04870)
+#define NBL_IPRO_SPORT_TBL_DROP_DEPTH (1)
+#define NBL_IPRO_SPORT_TBL_DROP_WIDTH (32)
+#define NBL_IPRO_SPORT_TBL_DROP_DWLEN (1)
+union ipro_sport_tbl_drop_u {
+ struct ipro_sport_tbl_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_SPORT_TBL_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_PTYPE_TBL_DROP_ADDR (0xb04874)
+#define NBL_IPRO_PTYPE_TBL_DROP_DEPTH (1)
+#define NBL_IPRO_PTYPE_TBL_DROP_WIDTH (32)
+#define NBL_IPRO_PTYPE_TBL_DROP_DWLEN (1)
+union ipro_ptype_tbl_drop_u {
+ struct ipro_ptype_tbl_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_PTYPE_TBL_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_UDL_DROP_ADDR (0xb04878)
+#define NBL_IPRO_UDL_DROP_DEPTH (1)
+#define NBL_IPRO_UDL_DROP_WIDTH (32)
+#define NBL_IPRO_UDL_DROP_DWLEN (1)
+union ipro_udl_drop_u {
+ struct ipro_udl_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_UDL_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_ANTIFAKE_DROP_ADDR (0xb0487c)
+#define NBL_IPRO_ANTIFAKE_DROP_DEPTH (1)
+#define NBL_IPRO_ANTIFAKE_DROP_WIDTH (32)
+#define NBL_IPRO_ANTIFAKE_DROP_DWLEN (1)
+union ipro_antifake_drop_u {
+ struct ipro_antifake_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_ANTIFAKE_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_VLAN_NUM_DROP_ADDR (0xb04880)
+#define NBL_IPRO_VLAN_NUM_DROP_DEPTH (1)
+#define NBL_IPRO_VLAN_NUM_DROP_WIDTH (32)
+#define NBL_IPRO_VLAN_NUM_DROP_DWLEN (1)
+union ipro_vlan_num_drop_u {
+ struct ipro_vlan_num_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_VLAN_NUM_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_DROP_ADDR (0xb04884)
+#define NBL_IPRO_TCP_STATE_DROP_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_DROP_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_DROP_DWLEN (1)
+union ipro_tcp_state_drop_u {
+ struct ipro_tcp_state_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_TCP_STATE_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_RAM_ERR_DROP_ADDR (0xb04888)
+#define NBL_IPRO_RAM_ERR_DROP_DEPTH (1)
+#define NBL_IPRO_RAM_ERR_DROP_WIDTH (32)
+#define NBL_IPRO_RAM_ERR_DROP_DWLEN (1)
+union ipro_ram_err_drop_u {
+ struct ipro_ram_err_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_RAM_ERR_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_KG_MISS_ADDR (0xb0488c)
+#define NBL_IPRO_KG_MISS_DEPTH (1)
+#define NBL_IPRO_KG_MISS_WIDTH (32)
+#define NBL_IPRO_KG_MISS_DWLEN (1)
+union ipro_kg_miss_u {
+ struct ipro_kg_miss {
+ u32 drop_cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 cnt:16; /* [31:16] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_IPRO_KG_MISS_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_DROP_ADDR (0xb04890)
+#define NBL_IPRO_MNG_DROP_DEPTH (1)
+#define NBL_IPRO_MNG_DROP_WIDTH (32)
+#define NBL_IPRO_MNG_DROP_DWLEN (1)
+union ipro_mng_drop_u {
+ struct ipro_mng_drop {
+ u32 cnt:16; /* [15:0] Default:0x0 SCTR */
+ u32 rsv:16; /* [31:16] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_MNG_DROP_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MTU_CHECK_DROP_ADDR (0xb04900)
+#define NBL_IPRO_MTU_CHECK_DROP_DEPTH (256)
+#define NBL_IPRO_MTU_CHECK_DROP_WIDTH (32)
+#define NBL_IPRO_MTU_CHECK_DROP_DWLEN (1)
+union ipro_mtu_check_drop_u {
+ struct ipro_mtu_check_drop {
+ u32 vsi_3:8; /* [7:0] Default:0x0 SCTR */
+ u32 vsi_2:8; /* [15:8] Default:0x0 SCTR */
+ u32 vsi_1:8; /* [23:16] Default:0x0 SCTR */
+ u32 vsi_0:8; /* [31:24] Default:0x0 SCTR */
+ } __packed info;
+ u32 data[NBL_IPRO_MTU_CHECK_DROP_DWLEN];
+} __packed;
+#define NBL_IPRO_MTU_CHECK_DROP_REG(r) (NBL_IPRO_MTU_CHECK_DROP_ADDR + \
+ (NBL_IPRO_MTU_CHECK_DROP_DWLEN * 4) * (r))
+
+#define NBL_IPRO_LAST_QUEUE_RAM_ERR_ADDR (0xb04d08)
+#define NBL_IPRO_LAST_QUEUE_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_QUEUE_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_QUEUE_RAM_ERR_DWLEN (1)
+union ipro_last_queue_ram_err_u {
+ struct ipro_last_queue_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_QUEUE_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_ADDR (0xb04d0c)
+#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_DWLEN (1)
+union ipro_last_dn_src_port_ram_err_u {
+ struct ipro_last_dn_src_port_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_ADDR (0xb04d10)
+#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_DWLEN (1)
+union ipro_last_up_src_port_ram_err_u {
+ struct ipro_last_up_src_port_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_ADDR (0xb04d14)
+#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_DWLEN (1)
+union ipro_last_dn_ptype_ram_err_u {
+ struct ipro_last_dn_ptype_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_ADDR (0xb04d18)
+#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_DWLEN (1)
+union ipro_last_up_ptype_ram_err_u {
+ struct ipro_last_up_ptype_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_ADDR (0xb04d20)
+#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_DWLEN (1)
+union ipro_last_kg_prof_ram_err_u {
+ struct ipro_last_kg_prof_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_KG_PROF_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_ADDR (0xb04d28)
+#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_DEPTH (1)
+#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_WIDTH (32)
+#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_DWLEN (1)
+union ipro_last_errcode_ram_err_u {
+ struct ipro_last_errcode_ram_err {
+ u32 info:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_LAST_ERRCODE_RAM_ERR_DWLEN];
+} __packed;
+
+#define NBL_IPRO_IN_PKT_CAP_EN_ADDR (0xb04dfc)
+#define NBL_IPRO_IN_PKT_CAP_EN_DEPTH (1)
+#define NBL_IPRO_IN_PKT_CAP_EN_WIDTH (32)
+#define NBL_IPRO_IN_PKT_CAP_EN_DWLEN (1)
+union ipro_in_pkt_cap_en_u {
+ struct ipro_in_pkt_cap_en {
+ u32 en:1; /* [0] Default:0x0 RW */
+ u32 rsv:31; /* [31:1] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_IN_PKT_CAP_EN_DWLEN];
+} __packed;
+
+#define NBL_IPRO_IN_PKT_CAP_ADDR (0xb04e00)
+#define NBL_IPRO_IN_PKT_CAP_DEPTH (64)
+#define NBL_IPRO_IN_PKT_CAP_WIDTH (32)
+#define NBL_IPRO_IN_PKT_CAP_DWLEN (1)
+union ipro_in_pkt_cap_u {
+ struct ipro_in_pkt_cap {
+ u32 data:32; /* [31:0] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_IN_PKT_CAP_DWLEN];
+} __packed;
+#define NBL_IPRO_IN_PKT_CAP_REG(r) (NBL_IPRO_IN_PKT_CAP_ADDR + \
+ (NBL_IPRO_IN_PKT_CAP_DWLEN * 4) * (r))
+
+#define NBL_IPRO_ERRCODE_TBL_ADDR (0xb05000)
+#define NBL_IPRO_ERRCODE_TBL_DEPTH (16)
+#define NBL_IPRO_ERRCODE_TBL_WIDTH (64)
+#define NBL_IPRO_ERRCODE_TBL_DWLEN (2)
+union ipro_errcode_tbl_u {
+ struct ipro_errcode_tbl {
+ u32 dqueue:11; /* [10:0] Default:0x0 RW */
+ u32 dqueue_en:1; /* [11] Default:0x0 RW */
+ u32 dqueue_pri:2; /* [13:12] Default:0x0 RW */
+ u32 set_dport_pri:2; /* [15:14] Default:0x0 RW */
+ u32 set_dport:16; /* [31:16] Default:0x0 RW */
+ u32 set_dport_en:1; /* [32] Default:0x0 RW */
+ u32 proc_done:1; /* [33] Default:0x0 RW */
+ u32 vld:1; /* [34] Default:0x0 RW */
+ u32 rsv:29; /* [63:35] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_ERRCODE_TBL_DWLEN];
+} __packed;
+#define NBL_IPRO_ERRCODE_TBL_REG(r) (NBL_IPRO_ERRCODE_TBL_ADDR + \
+ (NBL_IPRO_ERRCODE_TBL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_DN_PTYPE_TBL_ADDR (0xb06000)
+#define NBL_IPRO_DN_PTYPE_TBL_DEPTH (256)
+#define NBL_IPRO_DN_PTYPE_TBL_WIDTH (64)
+#define NBL_IPRO_DN_PTYPE_TBL_DWLEN (2)
+union ipro_dn_ptype_tbl_u {
+ struct ipro_dn_ptype_tbl {
+ u32 dn_entry_vld:1; /* [0] Default:0x0 RW */
+ u32 dn_mirror_en:1; /* [1] Default:0x0 RW */
+ u32 dn_mirror_pri:2; /* [3:2] Default:0x0 RW */
+ u32 dn_mirror_id:4; /* [7:4] Default:0x0 RW */
+ u32 dn_encap_en:1; /* [8] Default:0x0 RW */
+ u32 dn_encap_pri:2; /* [10:9] Default:0x0 RW */
+ u32 dn_encap_index:13; /* [23:11] Default:0x0 RW */
+ u32 not_used_0:6; /* [29:24] Default:0x0 RW */
+ u32 proc_done:1; /* [30] Default:0x0 RW */
+ u32 set_dport_en:1; /* [31] Default:0x0 RW */
+ u32 set_dport:16; /* [47:32] Default:0x0 RW */
+ u32 set_dport_pri:2; /* [49:48] Default:0x0 RW */
+ u32 dqueue_pri:2; /* [51:50] Default:0x0 RW */
+ u32 dqueue:11; /* [62:52] Default:0x0 RW */
+ u32 dqueue_en:1; /* [63] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_DN_PTYPE_TBL_DWLEN];
+} __packed;
+#define NBL_IPRO_DN_PTYPE_TBL_REG(r) (NBL_IPRO_DN_PTYPE_TBL_ADDR + \
+ (NBL_IPRO_DN_PTYPE_TBL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UP_PTYPE_TBL_ADDR (0xb06800)
+#define NBL_IPRO_UP_PTYPE_TBL_DEPTH (256)
+#define NBL_IPRO_UP_PTYPE_TBL_WIDTH (64)
+#define NBL_IPRO_UP_PTYPE_TBL_DWLEN (2)
+union ipro_up_ptype_tbl_u {
+ struct ipro_up_ptype_tbl {
+ u32 up_entry_vld:1; /* [0] Default:0x0 RW */
+ u32 up_mirror_en:1; /* [1] Default:0x0 RW */
+ u32 up_mirror_pri:2; /* [3:2] Default:0x0 RW */
+ u32 up_mirror_id:4; /* [7:4] Default:0x0 RW */
+ u32 up_decap_en:1; /* [8] Default:0x0 RW */
+ u32 up_decap_pri:2; /* [10:9] Default:0x0 RW */
+ u32 not_used_1:19; /* [29:11] Default:0x0 RW */
+ u32 proc_done:1; /* [30] Default:0x0 RW */
+ u32 set_dport_en:1; /* [31] Default:0x0 RW */
+ u32 set_dport:16; /* [47:32] Default:0x0 RW */
+ u32 set_dport_pri:2; /* [49:48] Default:0x0 RW */
+ u32 dqueue_pri:2; /* [51:50] Default:0x0 RW */
+ u32 dqueue:11; /* [62:52] Default:0x0 RW */
+ u32 dqueue_en:1; /* [63] Default:0x0 RW */
+ } __packed info;
+ u32 data[NBL_IPRO_UP_PTYPE_TBL_DWLEN];
+} __packed;
+#define NBL_IPRO_UP_PTYPE_TBL_REG(r) (NBL_IPRO_UP_PTYPE_TBL_ADDR + \
+ (NBL_IPRO_UP_PTYPE_TBL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_QUEUE_TBL_ADDR (0xb08000)
+#define NBL_IPRO_QUEUE_TBL_DEPTH (2048)
+#define NBL_IPRO_QUEUE_TBL_WIDTH (32)
+#define NBL_IPRO_QUEUE_TBL_DWLEN (1)
+union ipro_queue_tbl_u {
+ struct ipro_queue_tbl {
+ u32 vsi:10; /* [9:0] Default:0x0 RW */
+ u32 vsi_en:1; /* [10] Default:0x0 RW */
+ u32 rsv:21; /* [31:11] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_QUEUE_TBL_DWLEN];
+} __packed;
+#define NBL_IPRO_QUEUE_TBL_REG(r) (NBL_IPRO_QUEUE_TBL_ADDR + \
+ (NBL_IPRO_QUEUE_TBL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UP_SRC_PORT_TBL_ADDR (0xb0b000)
+#define NBL_IPRO_UP_SRC_PORT_TBL_DEPTH (4)
+#define NBL_IPRO_UP_SRC_PORT_TBL_WIDTH (64)
+#define NBL_IPRO_UP_SRC_PORT_TBL_DWLEN (2)
+union ipro_up_src_port_tbl_u {
+ struct ipro_up_src_port_tbl {
+ u32 entry_vld:1; /* [0] Default:0x0 RW */
+ u32 vlan_layer_num_0:2; /* [2:1] Default:0x0 RW */
+ u32 vlan_layer_num_1:2; /* [4:3] Default:0x0 RW */
+ u32 lag_vld:1; /* [5] Default:0x0 RW */
+ u32 lag_id:2; /* [7:6] Default:0x0 RW */
+ u32 hw_flow:1; /* [8] Default:0x0 RW */
+ u32 mirror_en:1; /* [9] Default:0x0 RW */
+ u32 mirror_pr:2; /* [11:10] Default:0x0 RW */
+ u32 mirror_id:4; /* [15:12] Default:0x0 RW */
+ u32 dqueue_pri:2; /* [17:16] Default:0x0 RW */
+ u32 set_dport_pri:2; /* [19:18] Default:0x0 RW */
+ u32 dqueue:11; /* [30:20] Default:0x0 RW */
+ u32 dqueue_en:1; /* [31] Default:0x0 RW */
+ u32 set_dport:16; /* [47:32] Default:0x0 RW */
+ u32 set_dport_en:1; /* [48] Default:0x0 RW */
+ u32 proc_done:1; /* [49] Default:0x0 RW */
+ u32 car_en:1; /* [50] Default:0x0 RW */
+ u32 car_pr:2; /* [52:51] Default:0x0 RW */
+ u32 car_id:10; /* [62:53] Default:0x0 RW */
+ u32 rsv:1; /* [63] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_UP_SRC_PORT_TBL_DWLEN];
+} __packed;
+#define NBL_IPRO_UP_SRC_PORT_TBL_REG(r) (NBL_IPRO_UP_SRC_PORT_TBL_ADDR + \
+ (NBL_IPRO_UP_SRC_PORT_TBL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_DN_SRC_PORT_TBL_ADDR (0xb0c000)
+#define NBL_IPRO_DN_SRC_PORT_TBL_DEPTH (1024)
+#define NBL_IPRO_DN_SRC_PORT_TBL_WIDTH (128)
+#define NBL_IPRO_DN_SRC_PORT_TBL_DWLEN (4)
+union ipro_dn_src_port_tbl_u {
+ struct ipro_dn_src_port_tbl {
+ u32 entry_vld:1; /* [0] Default:0x0 RW */
+ u32 mirror_en:1; /* [1] Default:0x0 RW */
+ u32 mirror_pr:2; /* [3:2] Default:0x0 RW */
+ u32 mirror_id:4; /* [7:4] Default:0x0 RW */
+ u32 vlan_layer_num_1:2; /* [9:8] Default:0x0 RW */
+ u32 hw_flow:1; /* [10] Default:0x0 RW */
+ u32 mtu_sel:4; /* [14:11] Default:0x0 RW */
+ u32 addr_check_en:1; /* [15] Default:0x0 RW */
+		u32 smac_l:32;	/* [47:16] Default:0x0 RW */
+		u32 smac_h:16;	/* [63:48] Default:0x0 RW */
+ u32 dqueue:11; /* [74:64] Default:0x0 RW */
+ u32 dqueue_en:1; /* [75] Default:0x0 RW */
+ u32 dqueue_pri:2; /* [77:76] Default:0x0 RW */
+ u32 set_dport_pri:2; /* [79:78] Default:0x0 RW */
+ u32 set_dport:16; /* [95:80] Default:0x0 RW */
+ u32 set_dport_en:1; /* [96] Default:0x0 RW */
+ u32 proc_done:1; /* [97] Default:0x0 RW */
+ u32 not_used_1:2; /* [99:98] Default:0x0 RW */
+ u32 rsv:28; /* [127:100] Default:0x0 RO */
+ } __packed info;
+ u32 data[NBL_IPRO_DN_SRC_PORT_TBL_DWLEN];
+} __packed;
+#define NBL_IPRO_DN_SRC_PORT_TBL_REG(r) (NBL_IPRO_DN_SRC_PORT_TBL_ADDR + \
+ (NBL_IPRO_DN_SRC_PORT_TBL_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
index b078b765f772..88246184a4cd 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
@@ -8,6 +8,1135 @@
#define _NBL_HW_LEONIS_H_
#include "nbl_core.h"
+#include "nbl_hw.h"
#include "nbl_hw_reg.h"
+#define NBL_DRAIN_WAIT_TIMES (30000)
+#define NBL_DRIVER_STATUS_REG (0x1300444)
+#define NBL_DRIVER_STATUS_BIT (16)
+
+#pragma pack(1)
+
+/* ---------- REG BASE ADDR ---------- */
+#define NBL_LB_PCIEX16_TOP_BASE (0x01500000)
+/* PPE modules base addr */
+#define NBL_PPE_FEM_BASE (0x00a04000)
+#define NBL_PPE_IPRO_BASE (0x00b04000)
+#define NBL_PPE_PP0_BASE (0x00b14000)
+#define NBL_PPE_PP1_BASE (0x00b24000)
+#define NBL_PPE_PP2_BASE (0x00b34000)
+#define NBL_PPE_MCC_BASE (0x00b44000)
+#define NBL_PPE_ACL_BASE (0x00b64000)
+#define NBL_PPE_CAP_BASE (0x00e64000)
+#define NBL_PPE_EPRO_BASE (0x00e74000)
+#define NBL_PPE_DPRBAC_BASE (0x00904000)
+#define NBL_PPE_UPRBAC_BASE (0x0000C000)
+/* Interface modules base addr */
+#define NBL_INTF_HOST_PCOMPLETER_BASE (0x00f08000)
+#define NBL_INTF_HOST_PADPT_BASE (0x00f4c000)
+#define NBL_INTF_HOST_CTRLQ_BASE (0x00f8c000)
+#define NBL_INTF_HOST_VDPA_NET_BASE (0x00f98000)
+#define NBL_INTF_HOST_CMDQ_BASE (0x00fa0000)
+#define NBL_INTF_HOST_MAILBOX_BASE (0x00fb0000)
+#define NBL_INTF_HOST_PCIE_BASE (0x01504000)
+#define NBL_INTF_HOST_PCAP_BASE (0x015a4000)
+/* DP modules base addr */
+#define NBL_DP_URMUX_BASE (0x00008000)
+#define NBL_DP_UPRBAC_BASE (0x0000C000)
+#define NBL_DP_UPA_BASE (0x0008C000)
+#define NBL_DP_USTORE_BASE (0x00104000)
+#define NBL_DP_UPMEM_BASE (0x00108000)
+#define NBL_DP_UBM_BASE (0x0010c000)
+#define NBL_DP_UQM_BASE (0x00114000)
+#define NBL_DP_USTAT_BASE (0x0011c000)
+#define NBL_DP_UPED_BASE (0x0015c000)
+#define NBL_DP_UCAR_BASE (0x00e84000)
+#define NBL_DP_UL4S_BASE (0x00204000)
+#define NBL_DP_UVN_BASE (0x00244000)
+#define NBL_DP_DSCH_BASE (0x00404000)
+#define NBL_DP_SHAPING_BASE (0x00504000)
+#define NBL_DP_DVN_BASE (0x00514000)
+#define NBL_DP_DL4S_BASE (0x00614000)
+#define NBL_DP_DRMUX_BASE (0x00654000)
+#define NBL_DP_DSTORE_BASE (0x00704000)
+#define NBL_DP_DPMEM_BASE (0x00708000)
+#define NBL_DP_DBM_BASE (0x0070c000)
+#define NBL_DP_DQM_BASE (0x00714000)
+#define NBL_DP_DSTAT_BASE (0x0071c000)
+#define NBL_DP_DPED_BASE (0x0075c000)
+#define NBL_DP_DPA_BASE (0x0085c000)
+#define NBL_DP_DPRBAC_BASE (0x00904000)
+#define NBL_DP_DDMUX_BASE (0x00984000)
+#define NBL_DP_LB_DDP_BUF_BASE (0x00000000)
+#define NBL_DP_LB_DDP_OUT_BASE (0x00000000)
+#define NBL_DP_LB_DDP_DIST_BASE (0x00000000)
+#define NBL_DP_LB_DDP_IN_BASE (0x00000000)
+#define NBL_DP_LB_UDP_BUF_BASE (0x00000000)
+#define NBL_DP_LB_UDP_OUT_BASE (0x00000000)
+#define NBL_DP_LB_UDP_DIST_BASE (0x00000000)
+#define NBL_DP_LB_UDP_IN_BASE (0x00000000)
+
+/* -------- LB -------- */
+#define NBL_LB_PF_CONFIGSPACE_SELECT_OFFSET (0x81100000)
+#define NBL_LB_PF_CONFIGSPACE_SELECT_STRIDE (0x00100000)
+#define NBL_LB_PF_CONFIGSPACE_BASE_ADDR (NBL_LB_PCIEX16_TOP_BASE + 0x00024000)
+#define NBL_LB_PCIEX16_TOP_AHB (NBL_LB_PCIEX16_TOP_BASE + 0x00000020)
+
+#define NBL_SRIOV_CAPS_OFFSET (0x140)
+
+/* -------- MAILBOX BAR2 ----- */
+#define NBL_MAILBOX_NOTIFY_ADDR (0x00000000)
+#define NBL_MAILBOX_BAR_REG (0x00000000)
+#define NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR (0x10)
+#define NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR (0x20)
+#define NBL_MAILBOX_QINFO_CFG_DBG_TABLE_ADDR (0x30)
+
+/* -------- MAILBOX -------- */
+
+/* mailbox BAR qinfo_cfg_dbg_table */
+struct nbl_mailbox_qinfo_cfg_dbg_tbl {
+ u16 rx_drop;
+ u16 rx_get;
+ u16 tx_drop;
+ u16 tx_out;
+ u16 rx_hd_ptr;
+ u16 tx_hd_ptr;
+ u16 rx_tail_ptr;
+ u16 tx_tail_ptr;
+};
+
+/* mailbox BAR qinfo_cfg_table */
+struct nbl_mailbox_qinfo_cfg_table {
+ u32 queue_base_addr_l;
+ u32 queue_base_addr_h;
+ u32 queue_size_bwind:4;
+ u32 rsv1:28;
+ u32 queue_rst:1;
+ u32 queue_en:1;
+ u32 dif_err:1;
+ u32 ptr_err:1;
+ u32 rsv2:28;
+};
+
+/* -------- MAILBOX BAR0 ----- */
+/* mailbox qinfo_map_table */
+#define NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id) \
+ (NBL_INTF_HOST_MAILBOX_BASE + 0x00001000 + \
+ (func_id) * sizeof(struct nbl_mailbox_qinfo_map_table))
+
+/* MAILBOX qinfo_map_table */
+struct nbl_mailbox_qinfo_map_table {
+ u32 function:3;
+ u32 devid:5;
+ u32 bus:8;
+ u32 msix_idx:13;
+ u32 msix_idx_valid:1;
+ u32 rsv:2;
+};
+
+/* -------- HOST_PCIE -------- */
+#define NBL_PCIE_HOST_K_PF_MASK_REG (NBL_INTF_HOST_PCIE_BASE + 0x00001004)
+#define NBL_PCIE_HOST_K_PF_FID(pf_id) \
+ (NBL_INTF_HOST_PCIE_BASE + 0x0000106C + 4 * (pf_id))
+#define NBL_PCIE_HOST_TL_CFG_BUSDEV (NBL_INTF_HOST_PCIE_BASE + 0x11040)
+
+/* -------- HOST_PADPT -------- */
+#define NBL_HOST_PADPT_HOST_CFG_FC_PD_DN (NBL_INTF_HOST_PADPT_BASE + 0x00000160)
+#define NBL_HOST_PADPT_HOST_CFG_FC_PH_DN (NBL_INTF_HOST_PADPT_BASE + 0x00000164)
+#define NBL_HOST_PADPT_HOST_CFG_FC_NPH_DN \
+ (NBL_INTF_HOST_PADPT_BASE + 0x0000016C)
+#define NBL_HOST_PADPT_HOST_CFG_FC_CPLH_UP \
+ (NBL_INTF_HOST_PADPT_BASE + 0x00000170)
+/* host_padpt host_msix_info */
+#define NBL_PADPT_ABNORMAL_MSIX_VEC (NBL_INTF_HOST_PADPT_BASE + 0x00000200)
+#define NBL_PADPT_ABNORMAL_TIMEOUT (NBL_INTF_HOST_PADPT_BASE + 0x00000204)
+#define NBL_PADPT_HOST_MSIX_INFO_REG_ARR(vector_id) \
+ (NBL_INTF_HOST_PADPT_BASE + 0x00010000 + \
+ (vector_id) * sizeof(struct nbl_host_msix_info))
+/* host_padpt host_vnet_qinfo */
+#define NBL_PADPT_HOST_VNET_QINFO_REG_ARR(queue_id) \
+ (NBL_INTF_HOST_PADPT_BASE + 0x00008000 + \
+ (queue_id) * sizeof(struct nbl_host_vnet_qinfo))
+
+struct nbl_host_msix_info {
+ u32 intrl_pnum:16;
+ u32 intrl_rate:16;
+ u32 function:3;
+ u32 devid:5;
+ u32 bus:8;
+ u32 valid:1;
+ u32 msix_mask_en:1;
+ u32 rsv:14;
+};
+
+/* host_padpt host_vnet_qinfo */
+struct nbl_host_vnet_qinfo {
+ u32 function_id:3;
+ u32 device_id:5;
+ u32 bus_id:8;
+ u32 msix_idx:13;
+ u32 msix_idx_valid:1;
+ u32 log_en:1;
+ u32 valid:1;
+ u32 tph_en:1;
+ u32 ido_en:1;
+ u32 rlo_en:1;
+ u32 rsv0:29;
+};
+
+struct nbl_msix_notify {
+ u32 glb_msix_idx:13;
+ u32 rsv1:3;
+ u32 mask:1;
+ u32 rsv2:15;
+};
+
+/* -------- HOST_PCOMPLETER -------- */
+/* pcompleter_host pcompleter_host_virtio_qid_map_table */
+#define NBL_PCOMPLETER_QID_MAP_REG_ARR(select, i) \
+ (NBL_INTF_HOST_PCOMPLETER_BASE + 0x00010000 + \
+ (select) * NBL_QID_MAP_TABLE_ENTRIES * \
+ sizeof(struct nbl_virtio_qid_map_table) + \
+ (i) * sizeof(struct nbl_virtio_qid_map_table))
+#define NBL_PCOMPLETER_FUNCTION_MSIX_MAP_REG_ARR(i) \
+ (NBL_INTF_HOST_PCOMPLETER_BASE + 0x00004000 + \
+ (i) * sizeof(struct nbl_function_msix_map))
+#define NBL_PCOMPLETER_HOST_MSIX_FID_TABLE(i) \
+ (NBL_INTF_HOST_PCOMPLETER_BASE + 0x0003a000 + \
+ (i) * sizeof(struct nbl_pcompleter_host_msix_fid_table))
+#define NBL_PCOMPLETER_INT_STATUS (NBL_INTF_HOST_PCOMPLETER_BASE + 0x00000000)
+#define NBL_PCOMPLETER_TLP_OUT_DROP_CNT \
+ (NBL_INTF_HOST_PCOMPLETER_BASE + 0x00002430)
+
+/* pcompleter_host pcompleter_host_virtio_table_ready */
+#define NBL_PCOMPLETER_QUEUE_TABLE_READY_REG \
+ (NBL_INTF_HOST_PCOMPLETER_BASE + 0x0000110C)
+/* pcompleter_host pcompleter_host_virtio_table_select */
+#define NBL_PCOMPLETER_QUEUE_TABLE_SELECT_REG \
+ (NBL_INTF_HOST_PCOMPLETER_BASE + 0x00001110)
+
+#define NBL_PCOMPLETER_MSIX_NOTIRY_OFFSET (0x1020)
+
+#define NBL_REG_WRITE_MAX_TRY_TIMES 2
+
+/* pcompleter_host virtio_qid_map_table */
+struct nbl_virtio_qid_map_table {
+ u32 local_qid:9;
+ u32 notify_addr_l:23;
+ u32 notify_addr_h;
+ u32 global_qid:12;
+ u32 ctrlq_flag:1;
+ u32 rsv1:19;
+ u32 rsv2;
+};
+
+struct nbl_pcompleter_host_msix_fid_table {
+ u32 fid:10;
+ u32 vld:1;
+ u32 rsv:21;
+};
+
+struct nbl_function_msix_map {
+ u64 msix_map_base_addr;
+ u32 function:3;
+ u32 devid:5;
+ u32 bus:8;
+ u32 valid:1;
+ u32 rsv0:15;
+ u32 rsv1;
+};
+
+struct nbl_queue_table_select {
+ u32 select:1;
+ u32 rsv:31;
+};
+
+struct nbl_queue_table_ready {
+ u32 ready:1;
+ u32 rsv:31;
+};
+
+/* IPRO ipro_queue_tbl */
+struct nbl_ipro_queue_tbl {
+ u32 vsi_id:10;
+ u32 vsi_en:1;
+ u32 rsv:21;
+};
+
+/* ---------- DPED ---------- */
+#define NBL_DPED_VLAN_OFFSET (NBL_DP_DPED_BASE + 0x000003F4)
+#define NBL_DPED_DSCP_OFFSET_0 (NBL_DP_DPED_BASE + 0x000003F8)
+#define NBL_DPED_DSCP_OFFSET_1 (NBL_DP_DPED_BASE + 0x000003FC)
+
+/* DPED dped_hw_edt_prof */
+#define NBL_DPED_HW_EDT_PROF_TABLE(i) \
+ (NBL_DP_DPED_BASE + 0x00001000 + \
+ (i) * sizeof(struct ped_hw_edit_profile))
+/* DPED dped_l4_ck_cmd_40 */
+
+/* DPED hw_edt_prof/ UPED hw_edt_prof */
+struct ped_hw_edit_profile {
+ u32 l4_len:2;
+#define NBL_PED_L4_LEN_MDY_CMD_0 (0)
+#define NBL_PED_L4_LEN_MDY_CMD_1 (1)
+#define NBL_PED_L4_LEN_MDY_DISABLE (2)
+ u32 l3_len:2;
+#define NBL_PED_L3_LEN_MDY_CMD_0 (0)
+#define NBL_PED_L3_LEN_MDY_CMD_1 (1)
+#define NBL_PED_L3_LEN_MDY_DISABLE (2)
+ u32 l4_ck:3;
+#define NBL_PED_L4_CKSUM_CMD_0 (0)
+#define NBL_PED_L4_CKSUM_CMD_1 (1)
+#define NBL_PED_L4_CKSUM_CMD_2 (2)
+#define NBL_PED_L4_CKSUM_CMD_3 (3)
+#define NBL_PED_L4_CKSUM_CMD_4 (4)
+#define NBL_PED_L4_CKSUM_CMD_5 (5)
+#define NBL_PED_L4_CKSUM_CMD_6 (6)
+#define NBL_PED_L4_CKSUM_DISABLE (7)
+ u32 l3_ck:1;
+#define NBL_PED_L3_CKSUM_ENABLE (1)
+#define NBL_PED_L3_CKSUM_DISABLE (0)
+ u32 l4_ck_zero_free:1;
+#define NBL_PED_L4_CKSUM_ZERO_FREE_ENABLE (1)
+#define NBL_PED_L4_CKSUM_ZERO_FREE_DISABLE (0)
+ u32 rsv:23;
+};
+
+/* ---------- UPED ---------- */
+/* UPED uped_hw_edt_prof */
+#define NBL_UPED_HW_EDT_PROF_TABLE(i) \
+ (NBL_DP_UPED_BASE + 0x00001000 + \
+ (i) * sizeof(struct ped_hw_edit_profile))
+
+/* --------- SHAPING --------- */
+#define NBL_SHAPING_NET_TIMMING_ADD_ADDR (NBL_DP_SHAPING_BASE + 0x00000300)
+#define NBL_SHAPING_NET(i) \
+ (NBL_DP_SHAPING_BASE + 0x00001800 + \
+ (i) * sizeof(struct nbl_shaping_net))
+
+/* cir 1, bandwidth 1kB/s in protocol environment */
+/* cir 1, bandwidth 1Mb/s */
+#define NBL_LR_LEONIS_SYS_CLK 15000.0 /* 0105tag KHz */
+#define NBL_LR_LEONIS_NET_SHAPING_CYCLE_MAX 25
+#define NBL_LR_LEONIS_NET_SHAPING_DPETH 600
+#define NBL_LR_LEONIS_NET_BUCKET_DEPTH 9600
+
+#define NBL_SHAPING_DPORT_25G_RATE 0x61A8
+#define NBL_SHAPING_DPORT_HALF_25G_RATE 0x30D4
+
+#define NBL_SHAPING_DPORT_100G_RATE 0x1A400
+#define NBL_SHAPING_DPORT_HALF_100G_RATE 0xD200
+
+#define NBL_UCAR_MAX_BUCKET_DEPTH 524287
+
+#define NBL_DSTORE_DROP_XOFF_TH 0xC8
+#define NBL_DSTORE_DROP_XON_TH 0x64
+
+#define NBL_DSTORE_DROP_XOFF_TH_100G 0x1F4
+#define NBL_DSTORE_DROP_XON_TH_100G 0x12C
+
+#define NBL_DSTORE_DROP_XOFF_TH_BOND_MAIN 0x180
+#define NBL_DSTORE_DROP_XON_TH_BOND_MAIN 0x180
+
+#define NBL_DSTORE_DROP_XOFF_TH_BOND_OTHER 0x64
+#define NBL_DSTORE_DROP_XON_TH_BOND_OTHER 0x64
+
+#define NBL_DSTORE_DROP_XOFF_TH_100G_BOND_MAIN 0x2D5
+#define NBL_DSTORE_DROP_XON_TH_100G_BOND_MAIN 0x2BC
+
+#define NBL_DSTORE_DROP_XOFF_TH_100G_BOND_OTHER 0x145
+#define NBL_DSTORE_DROP_XON_TH_100G_BOND_OTHER 0x12C
+
+#define NBL_DSTORE_DISC_BP_TH (NBL_DP_DSTORE_BASE + 0x00000630)
+
+struct dstore_disc_bp_th {
+ u32 xoff_th:10;
+ u32 rsv1:6;
+ u32 xon_th:10;
+ u32 rsv:5;
+ u32 en:1;
+};
+
+/* DSCH dsch_vn_sha2net_map_tbl */
+struct dsch_vn_sha2net_map_tbl {
+ u32 vld:1;
+ u32 reserve:31;
+};
+
+/* DSCH dsch_vn_net2sha_map_tbl */
+struct dsch_vn_net2sha_map_tbl {
+ u32 vld:1;
+ u32 reserve:31;
+};
+
+struct dsch_psha_en {
+ u32 en:4;
+ u32 rsv:28;
+};
+
+/* SHAPING shaping_net */
+struct nbl_shaping_net {
+ u32 valid:1;
+ u32 depth:19;
+ u32 cir:19;
+ u32 pir:19;
+ u32 cbs:21;
+ u32 pbs:21;
+ u32 rsv:28;
+};
+
+struct nbl_shaping_dport {
+ u32 valid:1;
+ u32 depth:19;
+ u32 cir:19;
+ u32 pir:19;
+ u32 cbs:21;
+ u32 pbs:21;
+ u32 rsv:28;
+};
+
+struct nbl_shaping_dvn_dport {
+ u32 valid:1;
+ u32 depth:19;
+ u32 cir:19;
+ u32 pir:19;
+ u32 cbs:21;
+ u32 pbs:21;
+ u32 rsv:28;
+};
+
+/* ---------- DSCH ---------- */
+/* DSCH vn_host_qid_max */
+#define NBL_DSCH_NOTIFY_BITMAP_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00003000 + (i) * BYTES_PER_DWORD)
+#define NBL_DSCH_FLY_BITMAP_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00004000 + (i) * BYTES_PER_DWORD)
+#define NBL_DSCH_PORT_MAP_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00005000 + (i) * sizeof(struct nbl_port_map))
+/* DSCH dsch_vn_q2tc_cfg_tbl */
+#define NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00010000 + \
+ (i) * sizeof(struct dsch_vn_q2tc_cfg_tbl))
+/* DSCH dsch_vn_n2g_cfg_tbl */
+#define NBL_DSCH_VN_N2G_CFG_TABLE_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00060000 + \
+ (i) * sizeof(struct dsch_vn_n2g_cfg_tbl))
+/* DSCH dsch_vn_g2p_cfg_tbl */
+#define NBL_DSCH_VN_G2P_CFG_TABLE_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00064000 + \
+ (i) * sizeof(struct dsch_vn_g2p_cfg_tbl))
+/* DSCH dsch_vn_sha2net_map_tbl */
+#define NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00070000 + \
+ (i) * sizeof(struct dsch_vn_sha2net_map_tbl))
+/* DSCH dsch_vn_net2sha_map_tbl */
+#define NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00074000 + \
+ (i) * sizeof(struct dsch_vn_net2sha_map_tbl))
+/* DSCH dsch_vn_tc_q_list_tbl */
+#define NBL_DSCH_VN_TC_Q_LIST_TABLE_REG_ARR(i) \
+ (NBL_DP_DSCH_BASE + 0x00040000 + \
+ (i) * sizeof(struct dsch_vn_tc_q_list_tbl))
+/* DSCH dsch maxqid */
+#define NBL_DSCH_HOST_QID_MAX (NBL_DP_DSCH_BASE + 0x00000118)
+#define NBL_DSCH_VN_QUANTA_ADDR (NBL_DP_DSCH_BASE + 0x00000134)
+#define NBL_DSCH_INT_STATUS (NBL_DP_DSCH_BASE + 0x00000000)
+#define NBL_DSCH_RDMA_OTHER_ABN (NBL_DP_DSCH_BASE + 0x00000080)
+#define NBL_DSCH_RDMA_OTHER_ABN_BIT (0x4000)
+#define NBL_DSCH_RDMA_DPQM_DB_LOST (2)
+
+#define NBL_MAX_QUEUE_ID (0x7ff)
+#define NBL_HOST_QUANTA (0x8000)
+#define NBL_ECPU_QUANTA (0x1000)
+
+/* DSCH dsch_vn_q2tc_cfg_tbl */
+struct dsch_vn_q2tc_cfg_tbl {
+ u32 tcid:13;
+ u32 rsv:18;
+ u32 vld:1;
+};
+
+/* DSCH dsch_vn_n2g_cfg_tbl */
+struct dsch_vn_n2g_cfg_tbl {
+ u32 grpid:8;
+ u32 rsv:23;
+ u32 vld:1;
+};
+
+/* DSCH dsch_vn_tc_q_list_tbl */
+struct dsch_vn_tc_q_list_tbl {
+ u32 nxt:11;
+ u32 reserve:18;
+ u32 regi:1;
+ u32 fly:1;
+ u32 vld:1;
+};
+
+/* DSCH dsch_vn_g2p_cfg_tbl */
+struct dsch_vn_g2p_cfg_tbl {
+ u32 port:3;
+ u32 rsv:28;
+ u32 vld:1;
+};
+
+struct dsch_vn_quanta {
+ u32 h_qua:16;
+ u32 e_qua:16;
+};
+
+/* ---------- DVN ---------- */
+/* DVN dvn_queue_table */
+#define NBL_DVN_QUEUE_TABLE_ARR(i) \
+ (NBL_DP_DVN_BASE + 0x00020000 + (i) * sizeof(struct dvn_queue_table))
+#define NBL_DVN_QUEUE_CXT_TABLE_ARR(i) \
+ (NBL_DP_DVN_BASE + 0x00030000 + (i) * sizeof(struct dvn_queue_context))
+/* DVN dvn_queue_reset */
+#define NBL_DVN_QUEUE_RESET_REG (NBL_DP_DVN_BASE + 0x00000400)
+/* DVN dvn_queue_reset_done */
+#define NBL_DVN_QUEUE_RESET_DONE_REG (NBL_DP_DVN_BASE + 0x00000404)
+#define NBL_DVN_ECPU_QUEUE_NUM (NBL_DP_DVN_BASE + 0x0000041C)
+#define NBL_DVN_DESCREQ_NUM_CFG (NBL_DP_DVN_BASE + 0x00000430)
+#define NBL_DVN_DESC_WR_MERGE_TIMEOUT (NBL_DP_DVN_BASE + 0x00000480)
+#define NBL_DVN_DIF_REQ_RD_RO_FLAG (NBL_DP_DVN_BASE + 0x0000045C)
+#define NBL_DVN_INT_STATUS (NBL_DP_DVN_BASE + 0x00000000)
+#define NBL_DVN_DESC_DIF_ERR_CNT (NBL_DP_DVN_BASE + 0x0000003C)
+#define NBL_DVN_DESC_DIF_ERR_INFO (NBL_DP_DVN_BASE + 0x00000038)
+#define NBL_DVN_PKT_DIF_ERR_INFO (NBL_DP_DVN_BASE + 0x00000030)
+#define NBL_DVN_PKT_DIF_ERR_CNT (NBL_DP_DVN_BASE + 0x00000034)
+#define NBL_DVN_ERR_QUEUE_ID_GET (NBL_DP_DVN_BASE + 0x0000040C)
+#define NBL_DVN_BACK_PRESSURE_MASK (NBL_DP_DVN_BASE + 0x00000464)
+#define NBL_DVN_DESCRD_L2_UNAVAIL_CNT (NBL_DP_DVN_BASE + 0x00000A1C)
+#define NBL_DVN_DESCRD_L2_NOAVAIL_CNT (NBL_DP_DVN_BASE + 0x00000A20)
+
+#define DEFAULT_DVN_DESCREQ_NUMCFG (0x00080014)
+#define DEFAULT_DVN_100G_DESCREQ_NUMCFG (0x00080020)
+
+#define NBL_DVN_INT_PKT_DIF_ERR (4)
+#define DEFAULT_DVN_DESC_WR_MERGE_TIMEOUT_MAX (0x3FF)
+
+#define NBL_DVN_INT_DESC_DIF_ERR (5)
+
+struct nbl_dvn_descreq_num_cfg {
+	u32 avring_cfg_num:1; /* split ring descreq_num 0:8,1:16 */
+ u32 rsv0:3;
+	/* packed ring descreq_num 0:8,1:12,2:16,3:20,4:24,5:26,6:32,7:32 */
+ u32 packed_l1_num:3;
+ u32 rsv1:25;
+};
+
+struct nbl_dvn_desc_wr_merge_timeout {
+ u32 cfg_cycle:10;
+ u32 rsv:22;
+};
+
+struct nbl_dvn_dif_req_rd_ro_flag {
+ u32 rd_desc_ro_en:1;
+ u32 rd_data_ro_en:1;
+ u32 rd_avring_ro_en:1;
+ u32 rsv:29;
+};
+
+/* DVN dvn_queue_table */
+struct dvn_queue_table {
+ u64 dvn_used_baddr;
+ u64 dvn_avail_baddr;
+ u64 dvn_queue_baddr;
+ u32 dvn_queue_size:4;
+ u32 dvn_queue_type:1;
+ u32 dvn_queue_en:1;
+ u32 dvn_extend_header_en:1;
+ u32 dvn_interleave_seg_disable:1;
+ u32 dvn_seg_disable:1;
+ u32 rsv0:23;
+ u32 rsv1:32;
+};
+
+/* DVN dvn_queue_context */
+struct dvn_queue_context {
+ u32 dvn_descrd_num:3;
+ u32 dvn_firstdescid:16;
+ u32 dvn_firstdesc:16;
+ u32 dvn_indirect_len:6;
+ u64 dvn_indirect_addr:64;
+ u32 dvn_indirect_next:5;
+ u32 dvn_l1_ring_read:16;
+ u32 dvn_avail_ring_read:16;
+ u32 dvn_ring_wrap_counter:1;
+ u32 dvn_lso_id:10;
+ u32 dvn_avail_ring_idx:16;
+ u32 dvn_used_ring_idx:16;
+ u32 dvn_indirect_left:1;
+ u32 dvn_desc_left:1;
+ u32 dvn_lso_flag:1;
+ u32 dvn_descrd_disable:1;
+ u32 dvn_queue_err:1;
+ u32 dvn_lso_drop:1;
+ u32 dvn_protected_bit:1;
+ u64 reserve;
+};
+
+/* DVN dvn_queue_reset */
+struct nbl_dvn_queue_reset {
+ u32 dvn_queue_index:11;
+ u32 vld:1;
+ u32 rsv:20;
+};
+
+/* DVN dvn_queue_reset_done */
+struct nbl_dvn_queue_reset_done {
+ u32 flag:1;
+ u32 rsv:31;
+};
+
+/* ---------- UVN ---------- */
+/* UVN uvn_queue_table */
+#define NBL_UVN_QUEUE_TABLE_ARR(i) \
+ (NBL_DP_UVN_BASE + 0x00010000 + (i) * sizeof(struct uvn_queue_table))
+/* UVN uvn_queue_cxt */
+#define NBL_UVN_QUEUE_CXT_TABLE_ARR(i) \
+ (NBL_DP_UVN_BASE + 0x00020000 + (i) * sizeof(struct uvn_queue_cxt))
+/* UVN uvn_desc_cxt */
+#define NBL_UVN_DESC_CXT_TABLE_ARR(i) \
+ (NBL_DP_UVN_BASE + 0x00028000 + (i) * sizeof(struct uvn_desc_cxt))
+/* UVN uvn_queue_reset */
+#define NBL_UVN_QUEUE_RESET_REG (NBL_DP_UVN_BASE + 0x00000200)
+/* UVN uvn_queue_reset_done */
+#define NBL_UVN_QUEUE_RESET_DONE_REG (NBL_DP_UVN_BASE + 0x00000408)
+#define NBL_UVN_STATIS_PKT_DROP(i) \
+ (NBL_DP_UVN_BASE + 0x00038000 + (i) * sizeof(u32))
+#define NBL_UVN_INT_STATUS (NBL_DP_UVN_BASE + 0x00000000)
+#define NBL_UVN_QUEUE_ERR_INFO (NBL_DP_UVN_BASE + 0x00000034)
+#define NBL_UVN_QUEUE_ERR_CNT (NBL_DP_UVN_BASE + 0x00000038)
+#define NBL_UVN_DESC_RD_WAIT (NBL_DP_UVN_BASE + 0x0000020C)
+#define NBL_UVN_QUEUE_ERR_MASK (NBL_DP_UVN_BASE + 0x00000224)
+#define NBL_UVN_ECPU_QUEUE_NUM (NBL_DP_UVN_BASE + 0x0000023C)
+#define NBL_UVN_DESC_WR_TIMEOUT (NBL_DP_UVN_BASE + 0x00000214)
+#define NBL_UVN_DIF_DELAY_REQ (NBL_DP_UVN_BASE + 0x000010D0)
+#define NBL_UVN_DIF_DELAY_TIME (NBL_DP_UVN_BASE + 0x000010D4)
+#define NBL_UVN_DIF_DELAY_MAX (NBL_DP_UVN_BASE + 0x000010D8)
+#define NBL_UVN_DESC_PRE_DESC_REQ_NULL (NBL_DP_UVN_BASE + 0x000012C8)
+#define NBL_UVN_DESC_PRE_DESC_REQ_LACK (NBL_DP_UVN_BASE + 0x000012CC)
+#define NBL_UVN_DESC_RD_ENTRY (NBL_DP_UVN_BASE + 0x000012D0)
+#define NBL_UVN_DESC_RD_DROP_DESC_LACK (NBL_DP_UVN_BASE + 0x000012E0)
+#define NBL_UVN_DIF_REQ_RO_FLAG (NBL_DP_UVN_BASE + 0x00000250)
+#define NBL_UVN_DESC_PREFETCH_INIT (NBL_DP_UVN_BASE + 0x00000204)
+#define NBL_UVN_DESC_WR_TIMEOUT_4US (0x960)
+#define NBL_UVN_DESC_PREFETCH_NUM (4)
+
+#define NBL_UVN_INT_QUEUE_ERR (5)
+
+struct uvn_dif_req_ro_flag {
+ u32 avail_rd:1;
+ u32 desc_rd:1;
+ u32 pkt_wr:1;
+ u32 desc_wr:1;
+ u32 rsv:28;
+};
+
+/* UVN uvn_queue_table */
+struct uvn_queue_table {
+ u64 used_baddr;
+ u64 avail_baddr;
+ u64 queue_baddr;
+ u32 queue_size_mask_pow:4;
+ u32 queue_type:1;
+ u32 queue_enable:1;
+ u32 extend_header_en:1;
+ u32 guest_csum_en:1;
+ u32 half_offload_en:1;
+ u32 rsv0:23;
+ u32 rsv1:32;
+};
+
+/* uvn uvn_queue_cxt */
+struct uvn_queue_cxt {
+ u32 queue_head:16;
+ u32 wrap_count:1;
+ u32 queue_err:1;
+ u32 prefetch_null_cnt:2;
+ u32 ntf_finish:1;
+ u32 spnd_flag:1;
+ u32 reserve0:10;
+ u32 avail_idx:16;
+ u32 avail_idx_spnd_flag:1;
+ u32 reserve1:15;
+ u32 reserve2[2];
+};
+
+/* uvn uvn_queue_reset */
+struct nbl_uvn_queue_reset {
+ u32 index:11;
+ u32 rsv0:5;
+ u32 vld:1;
+ u32 rsv1:15;
+};
+
+/* uvn uvn_queue_reset_done */
+struct nbl_uvn_queue_reset_done {
+ u32 flag:1;
+ u32 rsv:31;
+};
+
+/* uvn uvn_desc_cxt */
+struct uvn_desc_cxt {
+ u32 cache_head:9;
+ u32 reserve0:7;
+ u32 cache_tail:9;
+ u32 reserve1:7;
+ u32 cache_pref_num_prev:9;
+ u32 reserve2:7;
+ u32 cache_pref_num_post:9;
+ u32 reserve3:7;
+ u32 cache_head_byte:30;
+ u32 reserve4:2;
+ u32 cache_tail_byte:30;
+ u32 reserve5:2;
+};
+
+struct uvn_desc_wr_timeout {
+ u32 num:15;
+ u32 mask:1;
+ u32 rsv:16;
+};
+
+struct uvn_queue_err_mask {
+ u32 rsv0:1;
+ u32 buffer_len_err:1;
+ u32 next_err:1;
+ u32 indirect_err:1;
+ u32 split_err:1;
+ u32 dif_err:1;
+ u32 rsv1:26;
+};
+
+struct uvn_desc_prefetch_init {
+ u32 num:8;
+ u32 rsv1:8;
+ u32 sel:1;
+ u32 rsv:15;
+};
+
+/* -------- USTORE -------- */
+#define NBL_USTORE_PKT_LEN_ADDR (NBL_DP_USTORE_BASE + 0x00000108)
+#define NBL_USTORE_PORT_FC_TH_REG_ARR(port_id) \
+ (NBL_DP_USTORE_BASE + 0x00000134 + \
+ (port_id) * sizeof(struct nbl_ustore_port_fc_th))
+#define NBL_USTORE_COS_FC_TH_REG_ARR(cos_id) \
+ (NBL_DP_USTORE_BASE + 0x00000200 + \
+ (cos_id) * sizeof(struct nbl_ustore_cos_fc_th))
+#define NBL_USTORE_PORT_DROP_TH_REG_ARR(port_id) \
+ (NBL_DP_USTORE_BASE + 0x00000150 + \
+ (port_id) * sizeof(struct nbl_ustore_port_drop_th))
+#define NBL_USTORE_BUF_TOTAL_DROP_PKT (NBL_DP_USTORE_BASE + 0x000010A8)
+#define NBL_USTORE_BUF_TOTAL_TRUN_PKT (NBL_DP_USTORE_BASE + 0x000010AC)
+#define NBL_USTORE_BUF_PORT_DROP_PKT(eth_id) \
+ (NBL_DP_USTORE_BASE + 0x00002500 + (eth_id) * sizeof(u32))
+#define NBL_USTORE_BUF_PORT_TRUN_PKT(eth_id) \
+ (NBL_DP_USTORE_BASE + 0x00002540 + (eth_id) * sizeof(u32))
+
+#define NBL_USTORE_SIGNLE_ETH_DROP_TH 0xC80
+#define NBL_USTORE_DUAL_ETH_DROP_TH 0x640
+#define NBL_USTORE_QUAD_ETH_DROP_TH 0x320
+
+/* USTORE pkt_len */
+struct ustore_pkt_len {
+ u32 min:7;
+ u32 rsv:8;
+ u32 min_chk_en:1;
+ u32 max:14;
+ u32 rsv2:1;
+ u32 max_chk_len:1;
+};
+
+/* USTORE port_fc_th */
+struct nbl_ustore_port_fc_th {
+ u32 xoff_th:12;
+ u32 rsv1:4;
+ u32 xon_th:12;
+ u32 rsv2:2;
+ u32 fc_set:1;
+ u32 fc_en:1;
+};
+
+/* USTORE cos_fc_th */
+struct nbl_ustore_cos_fc_th {
+ u32 xoff_th:12;
+ u32 rsv1:4;
+ u32 xon_th:12;
+ u32 rsv2:2;
+ u32 fc_set:1;
+ u32 fc_en:1;
+};
+
+#define NBL_MAX_USTORE_COS_FC_TH (4080)
+
+/* USTORE port_drop_th */
+struct nbl_ustore_port_drop_th {
+ u32 disc_th:12;
+ u32 rsv:19;
+ u32 en:1;
+};
+
+/* ---------- IPRO ---------- */
+/* ipro module related macros */
+#define NBL_IPRO_MODULE (0xb04000)
+/* ipro queue tbl */
+#define NBL_IPRO_QUEUE_TBL(i) \
+ (NBL_IPRO_MODULE + 0x00004000 + (i) * sizeof(struct nbl_ipro_queue_tbl))
+#define NBL_IPRO_UP_SPORT_TABLE(i) \
+ (NBL_IPRO_MODULE + 0x00007000 + \
+ (i) * sizeof(struct nbl_ipro_upsport_tbl))
+#define NBL_IPRO_DN_SRC_PORT_TABLE(i) \
+ (NBL_PPE_IPRO_BASE + 0x00008000 + \
+ (i) * sizeof(struct nbl_ipro_dn_src_port_tbl))
+
+enum nbl_fwd_type_e {
+ NBL_FWD_TYPE_NORMAL = 0,
+ NBL_FWD_TYPE_CPU_ASSIGNED = 1,
+ NBL_FWD_TYPE_UPCALL = 2,
+ NBL_FWD_TYPE_SRC_MIRROR = 3,
+ NBL_FWD_TYPE_OTHER_MIRROR = 4,
+ NBL_FWD_TYPE_MNG = 5,
+ NBL_FWD_TYPE_GLB_LB = 6,
+ NBL_FWD_TYPE_DROP = 7,
+ NBL_FWD_TYPE_MAX = 8,
+};
+
+/* IPRO dn_src_port_tbl */
+struct nbl_ipro_dn_src_port_tbl {
+ u32 entry_vld:1;
+ u32 mirror_en:1;
+ u32 mirror_pr:2;
+ u32 mirror_id:4;
+ u32 vlan_layer_num_1:2;
+ u32 hw_flow:1;
+ u32 mtu_sel:4;
+ u32 addr_check_en:1;
+ u32 smac_low:16;
+ u32 smac_high;
+ u32 dqueue:11;
+ u32 dqueue_en:1;
+ u32 dqueue_pri:2;
+ u32 set_dport_pri:2;
+ union nbl_action_data set_dport;
+ u32 set_dport_en:1;
+ u32 proc_done:1;
+ u32 not_used_1:6;
+ u32 rsv:24;
+};
+
+/* IPRO up sport tab */
+struct nbl_ipro_upsport_tbl {
+ u32 entry_vld:1;
+ u32 vlan_layer_num_0:2;
+ u32 vlan_layer_num_1:2;
+ u32 lag_vld:1;
+ u32 lag_id:2;
+ u32 hw_flow:1;
+ u32 mirror_en:1;
+ u32 mirror_pr:2;
+ u32 mirror_id:4;
+ u32 dqueue_pri:2;
+ u32 set_dport_pri:2;
+ u32 dqueue:11;
+ u32 dqueue_en:1;
+ union nbl_action_data set_dport;
+ u32 set_dport_en:1;
+ u32 proc_done:1;
+ u32 car_en:1;
+ u32 car_pr:2;
+ u32 car_id:10;
+ u32 rsv:1;
+};
+
+/* ---------- EPRO ---------- */
+#define NBL_EPRO_INT_STATUS (NBL_PPE_EPRO_BASE + 0x00000000)
+#define NBL_EPRO_INT_MASK (NBL_PPE_EPRO_BASE + 0x00000004)
+#define NBL_EPRO_RSS_KEY_REG (NBL_PPE_EPRO_BASE + 0x00000400)
+
+/* epro epro_ept table */
+#define NBL_EPRO_EPT_TABLE(i) \
+ (NBL_PPE_EPRO_BASE + 0x00001800 + (i) * sizeof(struct nbl_epro_ept_tbl))
+/* epro epro_vpt table */
+#define NBL_EPRO_VPT_TABLE(i) \
+ (NBL_PPE_EPRO_BASE + 0x00004000 + (i) * sizeof(struct nbl_epro_vpt_tbl))
+/* epro epro_rss_pt table */
+#define NBL_EPRO_RSS_PT_TABLE(i) \
+ (NBL_PPE_EPRO_BASE + 0x00002000 + \
+ (i) * sizeof(struct nbl_epro_rss_pt_tbl))
+/* epro epro_rss_ret table */
+#define NBL_EPRO_RSS_RET_TABLE(i) \
+ (NBL_PPE_EPRO_BASE + 0x00008000 + \
+ (i) * sizeof(struct nbl_epro_rss_ret_tbl))
+/* epro epro_sch_cos_map table */
+#define NBL_EPRO_SCH_COS_MAP_TABLE(i, j) \
+ (NBL_PPE_EPRO_BASE + 0x00000640 + ((i) * 0x20) + \
+ (j) * sizeof(struct nbl_epro_cos_map))
+/* epro epro_port_pri_mdf_en */
+#define NBL_EPRO_PORT_PRI_MDF_EN (NBL_PPE_EPRO_BASE + 0x000006E0)
+/* epro epro_act_sel_en */
+#define NBL_EPRO_ACT_SEL_EN_REG (NBL_PPE_EPRO_BASE + 0x00000214)
+/* epro epro_kgen_ft table */
+#define NBL_EPRO_KGEN_FT_TABLE(i) \
+ (NBL_PPE_EPRO_BASE + 0x00001980 + \
+ (i) * sizeof(struct nbl_epro_kgen_ft_tbl))
+
+struct nbl_epro_rss_key {
+ u64 key0;
+ u64 key1;
+ u64 key2;
+ u64 key3;
+ u64 key4;
+};
+
+/* EPRO epro_rss_ret table */
+struct nbl_epro_rss_ret_tbl {
+ u32 dqueue0:11;
+ u32 vld0:1;
+ u32 rsv0:4;
+ u32 dqueue1:11;
+ u32 vld1:1;
+ u32 rsv1:4;
+};
+
+/* EPRO epro_rss_pt table */
+struct nbl_epro_rss_pt_tbl {
+ u32 entry_size:3;
+#define NBL_EPRO_RSS_ENTRY_SIZE_16 (0)
+#define NBL_EPRO_RSS_ENTRY_SIZE_32 (1)
+#define NBL_EPRO_RSS_ENTRY_SIZE_64 (2)
+#define NBL_EPRO_RSS_ENTRY_SIZE_128 (3)
+#define NBL_EPRO_RSS_ENTRY_SIZE_256 (4)
+ u32 offset1:14;
+ u32 offset1_vld:1;
+ u32 offset0:14;
+ u32 offset0_vld:1;
+ u32 vld:1;
+ u32 rsv:30;
+};
+
+/*EPRO sch cos map*/
+struct nbl_epro_cos_map {
+ u32 pkt_cos:3;
+ u32 dscp:6;
+ u32 rsv:23;
+};
+
+/* EPRO epro_port_pri_mdf_en */
+struct nbl_epro_port_pri_mdf_en_cfg {
+ u32 eth0:1;
+ u32 eth1:1;
+ u32 eth2:1;
+ u32 eth3:1;
+ u32 loop:1;
+ u32 rsv:27;
+};
+
+/* EPRO action filter table */
+struct nbl_epro_action_filter_tbl {
+ u64 filter_mask;
+};
+
+/* EPRO epr_ept table */
+struct nbl_epro_ept_tbl {
+ u32 cvlan:16;
+ u32 svlan:16;
+ u32 fwd:1;
+#define NBL_EPRO_FWD_TYPE_DROP (0)
+#define NBL_EPRO_FWD_TYPE_NORMAL (1)
+ u32 mirror_en:1;
+ u32 mirror_id:4;
+ u32 pop_i_vlan:1;
+ u32 pop_o_vlan:1;
+ u32 push_i_vlan:1;
+ u32 push_o_vlan:1;
+ u32 replace_i_vlan:1;
+ u32 replace_o_vlan:1;
+ u32 lag_alg_sel:2;
+#define NBL_EPRO_LAG_ALG_L2_HASH (0)
+#define NBL_EPRO_LAG_ALG_L23_HASH (1)
+#define NBL_EPRO_LAG_ALG_LINUX_L34_HASH (2)
+#define NBL_EPRO_LAG_ALG_DPDK_L34_HASH (3)
+ u32 lag_port_btm:4;
+ u32 lag_l2_protect_en:1;
+ u32 pfc_sch_cos_default:3;
+ u32 pfc_mode:1;
+ u32 vld:1;
+ u32 rsv:8;
+};
+
+/* EPRO epro_vpt table */
+struct nbl_epro_vpt_tbl {
+ u32 cvlan:16;
+ u32 svlan:16;
+ u32 fwd:1;
+#define NBL_EPRO_FWD_TYPE_DROP (0)
+#define NBL_EPRO_FWD_TYPE_NORMAL (1)
+ u32 mirror_en:1;
+ u32 mirror_id:4;
+ u32 car_en:1;
+ u32 car_id:10;
+ u32 pop_i_vlan:1;
+ u32 pop_o_vlan:1;
+ u32 push_i_vlan:1;
+ u32 push_o_vlan:1;
+ u32 replace_i_vlan:1;
+ u32 replace_o_vlan:1;
+ u32 rss_alg_sel:1;
+#define NBL_EPRO_RSS_ALG_TOEPLITZ_HASH (0)
+#define NBL_EPRO_RSS_ALG_CRC32 (1)
+ u32 rss_key_type_ipv4:1;
+#define NBL_EPRO_RSS_KEY_TYPE_IPV4_L3 (0)
+#define NBL_EPRO_RSS_KEY_TYPE_IPV4_L4 (1)
+ u32 rss_key_type_ipv6:1;
+#define NBL_EPRO_RSS_KEY_TYPE_IPV6_L3 (0)
+#define NBL_EPRO_RSS_KEY_TYPE_IPV6_L4 (1)
+ u32 vld:1;
+ u32 rsv:5;
+};
+
+/* UPA upa_pri_sel_conf */
+#define NBL_UPA_PRI_SEL_CONF_TABLE(id) \
+ (NBL_DP_UPA_BASE + 0x00000230 + \
+ ((id) * sizeof(struct nbl_upa_pri_sel_conf)))
+#define NBL_UPA_PRI_CONF_TABLE(id) \
+ (NBL_DP_UPA_BASE + 0x00002000 + \
+ ((id) * sizeof(struct nbl_upa_pri_conf)))
+
+/* UPA pri_sel_conf */
+struct nbl_upa_pri_sel_conf {
+ u32 pri_sel:5;
+ u32 pri_default:3;
+ u32 pri_disen:1;
+ u32 rsv:23;
+};
+
+/* UPA pri_conf_table */
+struct nbl_upa_pri_conf {
+ u32 pri0:4;
+ u32 pri1:4;
+ u32 pri2:4;
+ u32 pri3:4;
+ u32 pri4:4;
+ u32 pri5:4;
+ u32 pri6:4;
+ u32 pri7:4;
+};
+
+#define NBL_DQM_RXMAC_TX_PORT_BP_EN (NBL_DP_DQM_BASE + 0x00000660)
+#define NBL_DQM_RXMAC_TX_COS_BP_EN (NBL_DP_DQM_BASE + 0x00000664)
+#define NBL_DQM_RXMAC_RX_PORT_BP_EN (NBL_DP_DQM_BASE + 0x00000670)
+#define NBL_DQM_RX_PORT_BP_EN (NBL_DP_DQM_BASE + 0x00000610)
+#define NBL_DQM_RX_COS_BP_EN (NBL_DP_DQM_BASE + 0x00000614)
+
+/* DQM rxmac_tx_port_bp_en */
+struct nbl_dqm_rxmac_tx_port_bp_en_cfg {
+ u32 eth0:1;
+ u32 eth1:1;
+ u32 eth2:1;
+ u32 eth3:1;
+ u32 rsv:28;
+};
+
+/* DQM rxmac_tx_cos_bp_en */
+struct nbl_dqm_rxmac_tx_cos_bp_en_cfg {
+ u32 eth0:8;
+ u32 eth1:8;
+ u32 eth2:8;
+ u32 eth3:8;
+};
+
+#define NBL_UQM_QUE_TYPE (NBL_DP_UQM_BASE + 0x0000013c)
+#define NBL_UQM_RX_COS_BP_EN (NBL_DP_UQM_BASE + 0x00000614)
+#define NBL_UQM_TX_COS_BP_EN (NBL_DP_UQM_BASE + 0x00000604)
+
+#define NBL_UQM_DROP_PKT_CNT (NBL_DP_UQM_BASE + 0x000009C0)
+#define NBL_UQM_DROP_PKT_SLICE_CNT (NBL_DP_UQM_BASE + 0x000009C4)
+#define NBL_UQM_DROP_PKT_LEN_ADD_CNT (NBL_DP_UQM_BASE + 0x000009C8)
+#define NBL_UQM_DROP_HEAD_PNTR_ADD_CNT (NBL_DP_UQM_BASE + 0x000009CC)
+#define NBL_UQM_DROP_WEIGHT_ADD_CNT (NBL_DP_UQM_BASE + 0x000009D0)
+#define NBL_UQM_PORT_DROP_PKT_CNT (NBL_DP_UQM_BASE + 0x000009D4)
+#define NBL_UQM_PORT_DROP_PKT_SLICE_CNT (NBL_DP_UQM_BASE + 0x000009F4)
+#define NBL_UQM_PORT_DROP_PKT_LEN_ADD_CNT (NBL_DP_UQM_BASE + 0x00000A14)
+#define NBL_UQM_PORT_DROP_HEAD_PNTR_ADD_CNT (NBL_DP_UQM_BASE + 0x00000A34)
+#define NBL_UQM_PORT_DROP_WEIGHT_ADD_CNT (NBL_DP_UQM_BASE + 0x00000A54)
+#define NBL_UQM_FWD_DROP_CNT (NBL_DP_UQM_BASE + 0x00000A80)
+#define NBL_UQM_DPORT_DROP_CNT (NBL_DP_UQM_BASE + 0x00000B74)
+
+#define NBL_UQM_PORT_DROP_DEPTH 6
+#define NBL_UQM_DPORT_DROP_DEPTH 16
+
+struct nbl_uqm_que_type {
+ u32 bp_drop:1;
+ u32 rsv:31;
+};
+
+/* UQM rx_cos_bp_en */
+struct nbl_uqm_rx_cos_bp_en_cfg {
+ u32 vld_l;
+ u32 vld_h:16;
+};
+
+/* UQM rx_port_bp_en */
+struct nbl_uqm_rx_port_bp_en_cfg {
+ u32 l4s_h:1;
+ u32 l4s_e:1;
+ u32 rdma_h:1;
+ u32 rdma_e:1;
+ u32 emp:1;
+ u32 loopback:1;
+ u32 rsv:26;
+};
+
+/* UQM tx_cos_bp_en */
+struct nbl_uqm_tx_cos_bp_en_cfg {
+ u32 vld_l;
+ u32 vld_h:8;
+};
+
+#pragma pack()
+
+#define NBL_BYTES_IN_REG (4)
+#define NBL_SHAPING_DPORT_ADDR (0x504700)
+#define NBL_SHAPING_DPORT_DWLEN (4)
+#define NBL_SHAPING_DPORT_REG(r) \
+ (NBL_SHAPING_DPORT_ADDR + (NBL_SHAPING_DPORT_DWLEN * 4) * (r))
+#define NBL_SHAPING_DVN_DPORT_ADDR (0x504750)
+#define NBL_SHAPING_DVN_DPORT_DWLEN (4)
+#define NBL_SHAPING_DVN_DPORT_REG(r) \
+ (NBL_SHAPING_DVN_DPORT_ADDR + (NBL_SHAPING_DVN_DPORT_DWLEN * 4) * (r))
+#define NBL_DSCH_PSHA_EN_ADDR (0x404314)
+#define NBL_SHAPING_NET_ADDR (0x505800)
+#define NBL_SHAPING_NET_DWLEN (4)
+#define NBL_SHAPING_NET_REG(r) \
+ (NBL_SHAPING_NET_ADDR + (NBL_SHAPING_NET_DWLEN * 4) * (r))
+#define NBL_FW_BOARD_CONFIG 0x200
+#define NBL_FW_BOARD_DW3_OFFSET (NBL_FW_BOARD_CONFIG + 12)
+#define NBL_FW_BOARD_DW6_OFFSET (NBL_FW_BOARD_CONFIG + 24)
+union nbl_fw_board_cfg_dw3 {
+ struct board_cfg_dw3 {
+ u32 port_type:1;
+ u32 port_num:7;
+ u32 port_speed:2;
+ u32 gpio_type:3;
+ u32 p4_version:1; /* 0: low version; 1: high version */
+ u32 rsv:18;
+ } __packed info;
+ u32 data;
+};
+
+union nbl_fw_board_cfg_dw6 {
+ struct board_cfg_dw6 {
+ u8 lane_bitmap;
+ u8 eth_bitmap;
+ u16 rsv;
+ } __packed info;
+ u32 data;
+};
+
+#define NBL_LEONIS_QUIRKS_OFFSET (0x00000140)
+#define NBL_LEONIS_ILLEGAL_REG_VALUE (0xDEADBEEF)
+
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c
new file mode 100644
index 000000000000..e2795185d5ae
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c
@@ -0,0 +1,3903 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include "nbl_hw_reg.h"
+#include "nbl_hw_leonis.h"
+#include "nbl_hw_leonis_regs.h"
+
+#define NBL_SEC_BLOCK_SIZE (0x100)
+#define NBL_SEC000_SIZE (1)
+#define NBL_SEC000_ADDR (0x114150)
+#define NBL_SEC001_SIZE (1)
+#define NBL_SEC001_ADDR (0x15c190)
+#define NBL_SEC002_SIZE (1)
+#define NBL_SEC002_ADDR (0x10417c)
+#define NBL_SEC003_SIZE (1)
+#define NBL_SEC003_ADDR (0x714154)
+#define NBL_SEC004_SIZE (1)
+#define NBL_SEC004_ADDR (0x75c190)
+#define NBL_SEC005_SIZE (1)
+#define NBL_SEC005_ADDR (0x70417c)
+#define NBL_SEC006_SIZE (512)
+#define NBL_SEC006_ADDR (0x8f000)
+#define NBL_SEC006_REGI(i) (0x8f000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC007_SIZE (256)
+#define NBL_SEC007_ADDR (0x8f800)
+#define NBL_SEC007_REGI(i) (0x8f800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC008_SIZE (1024)
+#define NBL_SEC008_ADDR (0x90000)
+#define NBL_SEC008_REGI(i) (0x90000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC009_SIZE (2048)
+#define NBL_SEC009_ADDR (0x94000)
+#define NBL_SEC009_REGI(i) (0x94000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC010_SIZE (256)
+#define NBL_SEC010_ADDR (0x96000)
+#define NBL_SEC010_REGI(i) (0x96000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC011_SIZE (1024)
+#define NBL_SEC011_ADDR (0x91000)
+#define NBL_SEC011_REGI(i) (0x91000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC012_SIZE (128)
+#define NBL_SEC012_ADDR (0x92000)
+#define NBL_SEC012_REGI(i) (0x92000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC013_SIZE (64)
+#define NBL_SEC013_ADDR (0x92200)
+#define NBL_SEC013_REGI(i) (0x92200 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC014_SIZE (64)
+#define NBL_SEC014_ADDR (0x92300)
+#define NBL_SEC014_REGI(i) (0x92300 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC015_SIZE (1)
+#define NBL_SEC015_ADDR (0x8c214)
+#define NBL_SEC016_SIZE (1)
+#define NBL_SEC016_ADDR (0x8c220)
+#define NBL_SEC017_SIZE (1)
+#define NBL_SEC017_ADDR (0x8c224)
+#define NBL_SEC018_SIZE (1)
+#define NBL_SEC018_ADDR (0x8c228)
+#define NBL_SEC019_SIZE (1)
+#define NBL_SEC019_ADDR (0x8c22c)
+#define NBL_SEC020_SIZE (1)
+#define NBL_SEC020_ADDR (0x8c1f0)
+#define NBL_SEC021_SIZE (1)
+#define NBL_SEC021_ADDR (0x8c1f8)
+#define NBL_SEC022_SIZE (256)
+#define NBL_SEC022_ADDR (0x85f000)
+#define NBL_SEC022_REGI(i) (0x85f000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC023_SIZE (128)
+#define NBL_SEC023_ADDR (0x85f800)
+#define NBL_SEC023_REGI(i) (0x85f800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC024_SIZE (512)
+#define NBL_SEC024_ADDR (0x860000)
+#define NBL_SEC024_REGI(i) (0x860000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC025_SIZE (1024)
+#define NBL_SEC025_ADDR (0x864000)
+#define NBL_SEC025_REGI(i) (0x864000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC026_SIZE (256)
+#define NBL_SEC026_ADDR (0x866000)
+#define NBL_SEC026_REGI(i) (0x866000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC027_SIZE (512)
+#define NBL_SEC027_ADDR (0x861000)
+#define NBL_SEC027_REGI(i) (0x861000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC028_SIZE (64)
+#define NBL_SEC028_ADDR (0x862000)
+#define NBL_SEC028_REGI(i) (0x862000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC029_SIZE (32)
+#define NBL_SEC029_ADDR (0x862200)
+#define NBL_SEC029_REGI(i) (0x862200 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC030_SIZE (32)
+#define NBL_SEC030_ADDR (0x862300)
+#define NBL_SEC030_REGI(i) (0x862300 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC031_SIZE (1)
+#define NBL_SEC031_ADDR (0x85c214)
+#define NBL_SEC032_SIZE (1)
+#define NBL_SEC032_ADDR (0x85c220)
+#define NBL_SEC033_SIZE (1)
+#define NBL_SEC033_ADDR (0x85c224)
+#define NBL_SEC034_SIZE (1)
+#define NBL_SEC034_ADDR (0x85c228)
+#define NBL_SEC035_SIZE (1)
+#define NBL_SEC035_ADDR (0x85c22c)
+#define NBL_SEC036_SIZE (1)
+#define NBL_SEC036_ADDR (0xb04200)
+#define NBL_SEC037_SIZE (1)
+#define NBL_SEC037_ADDR (0xb04230)
+#define NBL_SEC038_SIZE (1)
+#define NBL_SEC038_ADDR (0xb04234)
+#define NBL_SEC039_SIZE (64)
+#define NBL_SEC039_ADDR (0xb05800)
+#define NBL_SEC039_REGI(i) (0xb05800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC040_SIZE (32)
+#define NBL_SEC040_ADDR (0xb05400)
+#define NBL_SEC040_REGI(i) (0xb05400 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC041_SIZE (16)
+#define NBL_SEC041_ADDR (0xb05500)
+#define NBL_SEC041_REGI(i) (0xb05500 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC042_SIZE (1)
+#define NBL_SEC042_ADDR (0xb14148)
+#define NBL_SEC043_SIZE (1)
+#define NBL_SEC043_ADDR (0xb14104)
+#define NBL_SEC044_SIZE (1)
+#define NBL_SEC044_ADDR (0xb1414c)
+#define NBL_SEC045_SIZE (1)
+#define NBL_SEC045_ADDR (0xb14150)
+#define NBL_SEC046_SIZE (256)
+#define NBL_SEC046_ADDR (0xb15000)
+#define NBL_SEC046_REGI(i) (0xb15000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC047_SIZE (32)
+#define NBL_SEC047_ADDR (0xb15800)
+#define NBL_SEC047_REGI(i) (0xb15800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC048_SIZE (1)
+#define NBL_SEC048_ADDR (0xb24148)
+#define NBL_SEC049_SIZE (1)
+#define NBL_SEC049_ADDR (0xb24104)
+#define NBL_SEC050_SIZE (1)
+#define NBL_SEC050_ADDR (0xb2414c)
+#define NBL_SEC051_SIZE (1)
+#define NBL_SEC051_ADDR (0xb24150)
+#define NBL_SEC052_SIZE (256)
+#define NBL_SEC052_ADDR (0xb25000)
+#define NBL_SEC052_REGI(i) (0xb25000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC053_SIZE (32)
+#define NBL_SEC053_ADDR (0xb25800)
+#define NBL_SEC053_REGI(i) (0xb25800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC054_SIZE (1)
+#define NBL_SEC054_ADDR (0xb34148)
+#define NBL_SEC055_SIZE (1)
+#define NBL_SEC055_ADDR (0xb34104)
+#define NBL_SEC056_SIZE (1)
+#define NBL_SEC056_ADDR (0xb3414c)
+#define NBL_SEC057_SIZE (1)
+#define NBL_SEC057_ADDR (0xb34150)
+#define NBL_SEC058_SIZE (256)
+#define NBL_SEC058_ADDR (0xb35000)
+#define NBL_SEC058_REGI(i) (0xb35000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC059_SIZE (32)
+#define NBL_SEC059_ADDR (0xb35800)
+#define NBL_SEC059_REGI(i) (0xb35800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC060_SIZE (1)
+#define NBL_SEC060_ADDR (0xe74630)
+#define NBL_SEC061_SIZE (1)
+#define NBL_SEC061_ADDR (0xe74634)
+#define NBL_SEC062_SIZE (64)
+#define NBL_SEC062_ADDR (0xe75000)
+#define NBL_SEC062_REGI(i) (0xe75000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC063_SIZE (32)
+#define NBL_SEC063_ADDR (0xe75480)
+#define NBL_SEC063_REGI(i) (0xe75480 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC064_SIZE (16)
+#define NBL_SEC064_ADDR (0xe75980)
+#define NBL_SEC064_REGI(i) (0xe75980 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC065_SIZE (32)
+#define NBL_SEC065_ADDR (0x15f000)
+#define NBL_SEC065_REGI(i) (0x15f000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC066_SIZE (32)
+#define NBL_SEC066_ADDR (0x75f000)
+#define NBL_SEC066_REGI(i) (0x75f000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC067_SIZE (1)
+#define NBL_SEC067_ADDR (0xb64108)
+#define NBL_SEC068_SIZE (1)
+#define NBL_SEC068_ADDR (0xb6410c)
+#define NBL_SEC069_SIZE (1)
+#define NBL_SEC069_ADDR (0xb64140)
+#define NBL_SEC070_SIZE (1)
+#define NBL_SEC070_ADDR (0xb64144)
+#define NBL_SEC071_SIZE (512)
+#define NBL_SEC071_ADDR (0xb65000)
+#define NBL_SEC071_REGI(i) (0xb65000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC072_SIZE (32)
+#define NBL_SEC072_ADDR (0xb65800)
+#define NBL_SEC072_REGI(i) (0xb65800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC073_SIZE (1)
+#define NBL_SEC073_ADDR (0x8c210)
+#define NBL_SEC074_SIZE (1)
+#define NBL_SEC074_ADDR (0x85c210)
+#define NBL_SEC075_SIZE (4)
+#define NBL_SEC075_ADDR (0x8c1b0)
+#define NBL_SEC075_REGI(i) (0x8c1b0 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC076_SIZE (4)
+#define NBL_SEC076_ADDR (0x8c1c0)
+#define NBL_SEC076_REGI(i) (0x8c1c0 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC077_SIZE (4)
+#define NBL_SEC077_ADDR (0x85c1b0)
+#define NBL_SEC077_REGI(i) (0x85c1b0 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC078_SIZE (1)
+#define NBL_SEC078_ADDR (0x85c1ec)
+#define NBL_SEC079_SIZE (1)
+#define NBL_SEC079_ADDR (0x8c1ec)
+#define NBL_SEC080_SIZE (1)
+#define NBL_SEC080_ADDR (0xb04440)
+#define NBL_SEC081_SIZE (1)
+#define NBL_SEC081_ADDR (0xb04448)
+#define NBL_SEC082_SIZE (1)
+#define NBL_SEC082_ADDR (0xb14450)
+#define NBL_SEC083_SIZE (1)
+#define NBL_SEC083_ADDR (0xb24450)
+#define NBL_SEC084_SIZE (1)
+#define NBL_SEC084_ADDR (0xb34450)
+#define NBL_SEC085_SIZE (1)
+#define NBL_SEC085_ADDR (0xa04188)
+#define NBL_SEC086_SIZE (1)
+#define NBL_SEC086_ADDR (0xe74218)
+#define NBL_SEC087_SIZE (1)
+#define NBL_SEC087_ADDR (0xe7421c)
+#define NBL_SEC088_SIZE (1)
+#define NBL_SEC088_ADDR (0xe74220)
+#define NBL_SEC089_SIZE (1)
+#define NBL_SEC089_ADDR (0xe74224)
+#define NBL_SEC090_SIZE (1)
+#define NBL_SEC090_ADDR (0x75c22c)
+#define NBL_SEC091_SIZE (1)
+#define NBL_SEC091_ADDR (0x75c230)
+#define NBL_SEC092_SIZE (1)
+#define NBL_SEC092_ADDR (0x75c238)
+#define NBL_SEC093_SIZE (1)
+#define NBL_SEC093_ADDR (0x75c244)
+#define NBL_SEC094_SIZE (1)
+#define NBL_SEC094_ADDR (0x75c248)
+#define NBL_SEC095_SIZE (1)
+#define NBL_SEC095_ADDR (0x75c250)
+#define NBL_SEC096_SIZE (1)
+#define NBL_SEC096_ADDR (0x15c230)
+#define NBL_SEC097_SIZE (1)
+#define NBL_SEC097_ADDR (0x15c234)
+#define NBL_SEC098_SIZE (1)
+#define NBL_SEC098_ADDR (0x15c238)
+#define NBL_SEC099_SIZE (1)
+#define NBL_SEC099_ADDR (0x15c23c)
+#define NBL_SEC100_SIZE (1)
+#define NBL_SEC100_ADDR (0x15c244)
+#define NBL_SEC101_SIZE (1)
+#define NBL_SEC101_ADDR (0x15c248)
+#define NBL_SEC102_SIZE (1)
+#define NBL_SEC102_ADDR (0xb6432c)
+#define NBL_SEC103_SIZE (1)
+#define NBL_SEC103_ADDR (0xb64220)
+#define NBL_SEC104_SIZE (1)
+#define NBL_SEC104_ADDR (0xb44804)
+#define NBL_SEC105_SIZE (1)
+#define NBL_SEC105_ADDR (0xb44a00)
+#define NBL_SEC106_SIZE (1)
+#define NBL_SEC106_ADDR (0xe84210)
+#define NBL_SEC107_SIZE (1)
+#define NBL_SEC107_ADDR (0xe84214)
+#define NBL_SEC108_SIZE (1)
+#define NBL_SEC108_ADDR (0xe64228)
+#define NBL_SEC109_SIZE (1)
+#define NBL_SEC109_ADDR (0x65413c)
+#define NBL_SEC110_SIZE (1)
+#define NBL_SEC110_ADDR (0x984144)
+#define NBL_SEC111_SIZE (1)
+#define NBL_SEC111_ADDR (0x114130)
+#define NBL_SEC112_SIZE (1)
+#define NBL_SEC112_ADDR (0x714138)
+#define NBL_SEC113_SIZE (1)
+#define NBL_SEC113_ADDR (0x114134)
+#define NBL_SEC114_SIZE (1)
+#define NBL_SEC114_ADDR (0x71413c)
+#define NBL_SEC115_SIZE (1)
+#define NBL_SEC115_ADDR (0x90437c)
+#define NBL_SEC116_SIZE (32)
+#define NBL_SEC116_ADDR (0xb05000)
+#define NBL_SEC116_REGI(i) (0xb05000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC117_SIZE (1)
+#define NBL_SEC117_ADDR (0xb043e0)
+#define NBL_SEC118_SIZE (1)
+#define NBL_SEC118_ADDR (0xb043f0)
+#define NBL_SEC119_SIZE (5)
+#define NBL_SEC119_ADDR (0x8c230)
+#define NBL_SEC119_REGI(i) (0x8c230 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC120_SIZE (1)
+#define NBL_SEC120_ADDR (0x8c1f4)
+#define NBL_SEC121_SIZE (1)
+#define NBL_SEC121_ADDR (0x2046c4)
+#define NBL_SEC122_SIZE (1)
+#define NBL_SEC122_ADDR (0x85c1f4)
+#define NBL_SEC123_SIZE (1)
+#define NBL_SEC123_ADDR (0x75c194)
+#define NBL_SEC124_SIZE (256)
+#define NBL_SEC124_ADDR (0xa05000)
+#define NBL_SEC124_REGI(i) (0xa05000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC125_SIZE (256)
+#define NBL_SEC125_ADDR (0xa06000)
+#define NBL_SEC125_REGI(i) (0xa06000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC126_SIZE (256)
+#define NBL_SEC126_ADDR (0xa07000)
+#define NBL_SEC126_REGI(i) (0xa07000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC127_SIZE (1)
+#define NBL_SEC127_ADDR (0x75c204)
+#define NBL_SEC128_SIZE (1)
+#define NBL_SEC128_ADDR (0x15c204)
+#define NBL_SEC129_SIZE (1)
+#define NBL_SEC129_ADDR (0x75c208)
+#define NBL_SEC130_SIZE (1)
+#define NBL_SEC130_ADDR (0x15c208)
+#define NBL_SEC131_SIZE (1)
+#define NBL_SEC131_ADDR (0x75c20c)
+#define NBL_SEC132_SIZE (1)
+#define NBL_SEC132_ADDR (0x15c20c)
+#define NBL_SEC133_SIZE (1)
+#define NBL_SEC133_ADDR (0x75c210)
+#define NBL_SEC134_SIZE (1)
+#define NBL_SEC134_ADDR (0x15c210)
+#define NBL_SEC135_SIZE (1)
+#define NBL_SEC135_ADDR (0x75c214)
+#define NBL_SEC136_SIZE (1)
+#define NBL_SEC136_ADDR (0x15c214)
+#define NBL_SEC137_SIZE (32)
+#define NBL_SEC137_ADDR (0x15d000)
+#define NBL_SEC137_REGI(i) (0x15d000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC138_SIZE (32)
+#define NBL_SEC138_ADDR (0x75d000)
+#define NBL_SEC138_REGI(i) (0x75d000 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC139_SIZE (1)
+#define NBL_SEC139_ADDR (0x75c310)
+#define NBL_SEC140_SIZE (1)
+#define NBL_SEC140_ADDR (0x75c314)
+#define NBL_SEC141_SIZE (1)
+#define NBL_SEC141_ADDR (0x75c340)
+#define NBL_SEC142_SIZE (1)
+#define NBL_SEC142_ADDR (0x75c344)
+#define NBL_SEC143_SIZE (1)
+#define NBL_SEC143_ADDR (0x75c348)
+#define NBL_SEC144_SIZE (1)
+#define NBL_SEC144_ADDR (0x75c34c)
+#define NBL_SEC145_SIZE (32)
+#define NBL_SEC145_ADDR (0xb15800)
+#define NBL_SEC145_REGI(i) (0xb15800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC146_SIZE (32)
+#define NBL_SEC146_ADDR (0xb25800)
+#define NBL_SEC146_REGI(i) (0xb25800 + NBL_BYTES_IN_REG * (i))
+#define NBL_SEC147_SIZE (32)
+#define NBL_SEC147_ADDR (0xb35800)
+#define NBL_SEC147_REGI(i) (0xb35800 + NBL_BYTES_IN_REG * (i))
+
+static const u32 nbl_sec046_1p_data[] = {
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xa0000000, 0x00077c2b, 0x005c0000,
+ 0x00000000, 0x00008100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x00073029, 0x00480000,
+ 0x00000000, 0x00008100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x00073029, 0x00480000,
+ 0x70000000, 0x00000020, 0x24140000, 0x00000020,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xa0000000, 0x00000009, 0x00000000,
+ 0x00000000, 0x00002100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00000009, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x70000000, 0x00000000, 0x20140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x70000000, 0x00000000, 0x20140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x38430000,
+ 0x70000006, 0x00000020, 0x24140000, 0x00000020,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x98cb1180, 0x6e36d469,
+ 0x9d8eb91c, 0x87e3ef47, 0xa2931288, 0x08405c5a,
+ 0x73865086, 0x00000080, 0x30140000, 0x00000080,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x000b3849, 0x38430000,
+ 0x00000006, 0x0000c100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00133889, 0x08400000,
+ 0x03865086, 0x4c016100, 0x00000014, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec071_1p_data[] = {
+ 0x00000000, 0x00000000, 0x00113d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7029b00, 0x00000000,
+ 0x00000000, 0x43000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x51e00000, 0x00000c9c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00293d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x67089b00, 0x00000002,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x80000000, 0x00000000, 0xb1e00000, 0x0000189c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+ 0x00000000, 0x43000000, 0x014b0c70, 0x00000000,
+ 0x00000000, 0x00000000, 0x92600000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+ 0x00000000, 0x43000000, 0x015b0c70, 0x00000000,
+ 0x00000000, 0x00000000, 0x92600000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe6d29a00, 0x000149c4,
+ 0x00000000, 0x4b000000, 0x00000004, 0x00000000,
+ 0x80000000, 0x00022200, 0x62600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe6d2c000, 0x000149c4,
+ 0x00000000, 0x5b000000, 0x00000004, 0x00000000,
+ 0x80000000, 0x00022200, 0x62600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x64d49200, 0x5e556945,
+ 0xc666d89a, 0x4b0001a9, 0x00004c84, 0x00000000,
+ 0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x6ed4ba00, 0x5ef56bc5,
+ 0xc666d8c0, 0x5b0001a9, 0x00004dc4, 0x00000000,
+ 0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000002, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00700000, 0x00000000, 0x08028000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec046_2p_data[] = {
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xa0000000, 0x00077c2b, 0x005c0000,
+ 0x00000000, 0x00008100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x00073029, 0x00480000,
+ 0x00000000, 0x00008100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x00073029, 0x00480000,
+ 0x70000000, 0x00000020, 0x04140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xa0000000, 0x00000009, 0x00000000,
+ 0x00000000, 0x00002100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00000009, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x70000000, 0x00000000, 0x00140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x70000000, 0x00000000, 0x00140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x38430000,
+ 0x70000006, 0x00000020, 0x04140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x98cb1180, 0x6e36d469,
+ 0x9d8eb91c, 0x87e3ef47, 0xa2931288, 0x08405c5a,
+ 0x73865086, 0x00000080, 0x10140000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x000b3849, 0x38430000,
+ 0x00000006, 0x0000c100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00133889, 0x08400000,
+ 0x03865086, 0x4c016100, 0x00000014, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec071_2p_data[] = {
+ 0x00000000, 0x00000000, 0x00113d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7029b00, 0x00000000,
+ 0x00000000, 0x43000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x51e00000, 0x00000c9c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00293d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x67089b00, 0x00000002,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x80000000, 0x00000000, 0xb1e00000, 0x0000189c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+ 0x00000000, 0x43000000, 0x014b0c70, 0x00000000,
+ 0x00000000, 0x00000000, 0x92600000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+ 0x00000000, 0x43000000, 0x015b0c70, 0x00000000,
+ 0x00000000, 0x00000000, 0x92600000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe6d29a00, 0x000149c4,
+ 0x00000000, 0x4b000000, 0x00000004, 0x00000000,
+ 0x80000000, 0x00022200, 0x62600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe6d2c000, 0x000149c4,
+ 0x00000000, 0x5b000000, 0x00000004, 0x00000000,
+ 0x80000000, 0x00022200, 0x62600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x64d49200, 0x5e556945,
+ 0xc666d89a, 0x4b0001a9, 0x00004c84, 0x00000000,
+ 0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x6ed4ba00, 0x5ef56bc5,
+ 0xc666d8c0, 0x5b0001a9, 0x00004dc4, 0x00000000,
+ 0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000002, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00700000, 0x00000000, 0x00028000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec006_data[] = {
+ 0x81008100, 0x00000001, 0x88a88100, 0x00000001,
+ 0x810088a8, 0x00000001, 0x88a888a8, 0x00000001,
+ 0x81000000, 0x00000001, 0x88a80000, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x08004000, 0x00000001, 0x86dd6000, 0x00000001,
+ 0x81000000, 0x00000001, 0x88a80000, 0x00000001,
+ 0x08060000, 0x00000001, 0x80350000, 0x00000001,
+ 0x88080000, 0x00000001, 0x88f70000, 0x00000001,
+ 0x88cc0000, 0x00000001, 0x88090000, 0x00000001,
+ 0x89150000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000001,
+ 0x11006000, 0x00000001, 0x06006000, 0x00000001,
+ 0x02006000, 0x00000001, 0x3a006000, 0x00000001,
+ 0x2f006000, 0x00000001, 0x84006000, 0x00000001,
+ 0x32006000, 0x00000001, 0x2c006000, 0x00000001,
+ 0x3c006000, 0x00000001, 0x2b006000, 0x00000001,
+ 0x00006000, 0x00000001, 0x00004000, 0x00000001,
+ 0x00004000, 0x00000001, 0x20004000, 0x00000001,
+ 0x40004000, 0x00000001, 0x00000000, 0x00000001,
+ 0x11000000, 0x00000001, 0x06000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x2f000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x2c000000, 0x00000001,
+ 0x2b000000, 0x00000001, 0x3c000000, 0x00000001,
+ 0x3b000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x11000000, 0x00000001, 0x06000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x2f000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x2c000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x2b000000, 0x00000001, 0x3c000000, 0x00000001,
+ 0x3b000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x06001072, 0x00000001, 0x06000000, 0x00000001,
+ 0x110017c1, 0x00000001, 0x110012b7, 0x00000001,
+ 0x110012b5, 0x00000001, 0x01000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x11000043, 0x00000001, 0x11000044, 0x00000001,
+ 0x11000222, 0x00000001, 0x11000000, 0x00000001,
+ 0x2f006558, 0x00000001, 0x32000000, 0x00000001,
+ 0x84000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x65582000, 0x00000001, 0x65583000, 0x00000001,
+ 0x6558a000, 0x00000001, 0x6558b000, 0x00000001,
+ 0x65580000, 0x00000001, 0x12b50000, 0x00000001,
+ 0x02000102, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x65580000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x81008100, 0x00000001, 0x88a88100, 0x00000001,
+ 0x810088a8, 0x00000001, 0x88a888a8, 0x00000001,
+ 0x81000000, 0x00000001, 0x88a80000, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x08004000, 0x00000001, 0x86dd6000, 0x00000001,
+ 0x81000000, 0x00000001, 0x88a80000, 0x00000001,
+ 0x08060000, 0x00000001, 0x80350000, 0x00000001,
+ 0x88080000, 0x00000001, 0x88f70000, 0x00000001,
+ 0x88cc0000, 0x00000001, 0x88090000, 0x00000001,
+ 0x89150000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000001,
+ 0x11006000, 0x00000001, 0x06006000, 0x00000001,
+ 0x02006000, 0x00000001, 0x3a006000, 0x00000001,
+ 0x2f006000, 0x00000001, 0x84006000, 0x00000001,
+ 0x32006000, 0x00000001, 0x2c006000, 0x00000001,
+ 0x3c006000, 0x00000001, 0x2b006000, 0x00000001,
+ 0x00006000, 0x00000001, 0x00004000, 0x00000001,
+ 0x00004000, 0x00000001, 0x20004000, 0x00000001,
+ 0x40004000, 0x00000001, 0x00000000, 0x00000001,
+ 0x11000000, 0x00000001, 0x06000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x2f000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x2c000000, 0x00000001,
+ 0x2b000000, 0x00000001, 0x3c000000, 0x00000001,
+ 0x3b000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x11000000, 0x00000001, 0x06000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x2f000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x2c000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x2b000000, 0x00000001, 0x3c000000, 0x00000001,
+ 0x3b000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x06001072, 0x00000001, 0x06000000, 0x00000001,
+ 0x110012b7, 0x00000001, 0x01000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x11000043, 0x00000001, 0x11000044, 0x00000001,
+ 0x11000222, 0x00000001, 0x11000000, 0x00000001,
+ 0x2f006558, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec007_data[] = {
+ 0x10001000, 0x00001000, 0x10000000, 0x00000000,
+ 0x1000ffff, 0x0000ffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00000fff, 0x00000fff, 0x1000ffff, 0x0000ffff,
+ 0x0000ffff, 0x0000ffff, 0x0000ffff, 0x0000ffff,
+ 0x0000ffff, 0x0000ffff, 0x0000ffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff,
+ 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff,
+ 0x00ff0fff, 0x10ff0fff, 0xffff0fff, 0x00000fff,
+ 0x1fff0fff, 0x1fff0fff, 0x1fff0fff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00ffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00ff0000, 0x00ffffff, 0x00ff0000, 0x00ff0000,
+ 0x00ff0000, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ff0000, 0x00ff0000, 0x00ff0001, 0x00ffffff,
+ 0x00ff0000, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00000fff, 0x00000fff, 0x00000fff, 0x00000fff,
+ 0x00000fff, 0x0000ffff, 0xc0ff0000, 0xc0ffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x0000ffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x10001000, 0x00001000, 0x10000000, 0x00000000,
+ 0x1000ffff, 0x0000ffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00000fff, 0x00000fff, 0x1000ffff, 0x0000ffff,
+ 0x0000ffff, 0x0000ffff, 0x0000ffff, 0x0000ffff,
+ 0x0000ffff, 0x0000ffff, 0x0000ffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff,
+ 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff,
+ 0x00ff0fff, 0x10ff0fff, 0xffff0fff, 0x00000fff,
+ 0x1fff0fff, 0x1fff0fff, 0x1fff0fff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00ffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00ff0000, 0x00ffffff, 0x00ff0000, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ff0000, 0x00ff0000, 0x00ff0001, 0x00ffffff,
+ 0x00ff0000, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+};
+
+static u32 nbl_sec008_data[] = {
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00800090, 0x12009092, 0x00000100, 0x00000000,
+ 0x00800090, 0x12009092, 0x00000100, 0x00000000,
+ 0x00800000, 0x0e008c8e, 0x00000100, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x08909581, 0x00008680, 0x00000200, 0x00000000,
+ 0x10900082, 0x28008680, 0x00000200, 0x00000000,
+ 0x809b0093, 0x00000000, 0x00000100, 0x00000000,
+ 0x809b0093, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b0000, 0x00000000, 0x00000100, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x009b0000, 0x00000000, 0x00000100, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000200, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000200, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000200, 0x00000000,
+ 0x40000000, 0x01c180c2, 0x00000300, 0x00000000,
+ 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000,
+ 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000,
+ 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000,
+ 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000,
+ 0x000b0000, 0x00000000, 0x00000200, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000300, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000300, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000300, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000300, 0x00000000,
+ 0x40000000, 0x01c180c2, 0x00000400, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000400, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000,
+ 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000,
+ 0x02a00084, 0x08008890, 0x00000600, 0x00000000,
+ 0x02ab848a, 0x08000000, 0x00000500, 0x00000000,
+ 0x02a00084, 0x10008200, 0x00000600, 0x00000000,
+ 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000500, 0x00000000,
+ 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000500, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000500, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000500, 0x00000000,
+ 0x02ab0084, 0x08000000, 0x00000500, 0x00000000,
+ 0x00a00000, 0x04008280, 0x00000600, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000500, 0x00000000,
+ 0x04ab8e84, 0x0c000000, 0x00000500, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000500, 0x00000000,
+ 0x00000000, 0x0400ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x0800ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x0800ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x0c00ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x10008200, 0x00000700, 0x00000000,
+ 0x00000000, 0x08008200, 0x00000700, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00808786, 0x16009496, 0x00000900, 0x00000000,
+ 0x00808786, 0x16009496, 0x00000900, 0x00000000,
+ 0x00808786, 0x16009496, 0x00000900, 0x00000000,
+ 0x00808786, 0x16009496, 0x00000900, 0x00000000,
+ 0x00800086, 0x12009092, 0x00000900, 0x00000000,
+ 0x00800086, 0x12009092, 0x00000900, 0x00000000,
+ 0x00800000, 0x0e008c8e, 0x00000900, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x08908192, 0x00008680, 0x00000a00, 0x00000000,
+ 0x10908292, 0x28008680, 0x00000a00, 0x00000000,
+ 0x809b9392, 0x00000000, 0x00000900, 0x00000000,
+ 0x809b9392, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b8f92, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b8f92, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b8f92, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b8f92, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b8f92, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b8f92, 0x00000000, 0x00000900, 0x00000000,
+ 0x009b0092, 0x00000000, 0x00000900, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x009b0092, 0x00000000, 0x00000900, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000a00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000a00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000a00, 0x00000000,
+ 0x40000000, 0x01c180c2, 0x00000b00, 0x00000000,
+ 0x00000000, 0x00a089c2, 0x00000df0, 0x00000000,
+ 0x000b0085, 0x00a00000, 0x00000af0, 0x00000000,
+ 0x000b0085, 0x00a00000, 0x00000af0, 0x00000000,
+ 0x00000000, 0x00a089c2, 0x00000df0, 0x00000000,
+ 0x000b0000, 0x00000000, 0x00000a00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000b00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000b00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000b00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000b00, 0x00000000,
+ 0x40000000, 0x01c180c2, 0x00000c00, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000c00, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+ 0x01ab0083, 0x0ca00000, 0x00000d0f, 0x00000000,
+ 0x01ab0083, 0x0ca00000, 0x00000d0f, 0x00000000,
+ 0x02ab8a84, 0x08000000, 0x00000d00, 0x00000000,
+ 0x00ab8f8e, 0x04000000, 0x00000d00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000d00, 0x00000000,
+ 0x00ab8f8e, 0x04000000, 0x00000d00, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000d00, 0x00000000,
+ 0x04ab8e84, 0x0c000000, 0x00000d00, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000d00, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000d00, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000d00, 0x00000000,
+ 0x02ab0084, 0x08000000, 0x00000d00, 0x00000000,
+ 0x00ab0000, 0x04000000, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec009_data[] = {
+ 0x00000000, 0x00000060, 0x00000000, 0x00000090,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000050, 0x00000000, 0x000000a0,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x000000a0, 0x00000000, 0x00000050,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000800, 0x00000000, 0x00000700,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000900, 0x00000000, 0x00000600,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00008000, 0x00000000, 0x00007000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00009000, 0x00000000, 0x00006000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x0000a000, 0x00000000, 0x00005000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x000c0000, 0x00000000, 0x00030000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x000d0000, 0x00000000, 0x00020000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x000e0000, 0x00000000, 0x00010000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000040, 0x00000000, 0x000000b0,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000070, 0x00000000, 0x00000080,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000090, 0x00000000, 0x00000060,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000080, 0x00000000, 0x00000070,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000700, 0x00000000, 0x00000800,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00007000, 0x00000000, 0x00008000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00080000, 0x00000000, 0x00070000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000c00, 0x00000000, 0x00000300,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000d00, 0x00000000, 0x00000200,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00600000, 0x00000000, 0x00900000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00d00000, 0x00000000, 0x00200000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00500000, 0x00000000, 0x00a00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00700000, 0x00000000, 0x00800000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00e00000, 0x00000000, 0x00100000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00f00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00f00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00100000, 0x00000000, 0x00e00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00300000, 0x00000000, 0x00c00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00800000, 0x00000000, 0x00700000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00900000, 0x00000000, 0x00600000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00a00000, 0x00000000, 0x00500000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00b00000, 0x00000000, 0x00400000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000060, 0x00400000, 0x00000090, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000050, 0x00400000, 0x000000a0, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000000a0, 0x00400000, 0x00000050, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000800, 0x00400000, 0x00000700, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000900, 0x00400000, 0x00000600, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00008000, 0x00400000, 0x00007000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00009000, 0x00400000, 0x00006000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x0000a000, 0x00400000, 0x00005000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000c0000, 0x00400000, 0x00030000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000d0000, 0x00400000, 0x00020000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000e0000, 0x00400000, 0x00010000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000070, 0x00400000, 0x00000080, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000700, 0x00400000, 0x00000800, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00007000, 0x00400000, 0x00008000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080000, 0x00400000, 0x00070000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000c00, 0x00400000, 0x00000300, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000d00, 0x00400000, 0x00000200, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000040, 0x00400000, 0x000000b0, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000090, 0x00400000, 0x00000060, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000080, 0x00400000, 0x00000070, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000060, 0x06000000, 0x00000090, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000060, 0x07000000, 0x00000090, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000050, 0x06000000, 0x000000a0, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000050, 0x07000000, 0x000000a0, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000000a0, 0x06000000, 0x00000050, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000000a0, 0x07000000, 0x00000050, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000800, 0x06000000, 0x00000700, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000900, 0x06000000, 0x00000600, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00008000, 0x06000000, 0x00007000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00009000, 0x06000000, 0x00006000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x0000a000, 0x06000000, 0x00005000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000c0000, 0x06000000, 0x00030000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000d0000, 0x06000000, 0x00020000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000e0000, 0x06000000, 0x00010000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000800, 0x07000000, 0x00000700, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000900, 0x07000000, 0x00000600, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00008000, 0x07000000, 0x00007000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00009000, 0x07000000, 0x00006000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x0000a000, 0x07000000, 0x00005000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000c0000, 0x07000000, 0x00030000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000d0000, 0x07000000, 0x00020000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000e0000, 0x07000000, 0x00010000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000070, 0x06000000, 0x00000080, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000070, 0x07000000, 0x00000080, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000700, 0x06000000, 0x00000800, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00007000, 0x06000000, 0x00008000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080000, 0x06000000, 0x00070000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000c00, 0x06000000, 0x00000300, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000d00, 0x06000000, 0x00000200, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000700, 0x07000000, 0x00000800, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00007000, 0x07000000, 0x00008000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080000, 0x07000000, 0x00070000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000c00, 0x07000000, 0x00000300, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000d00, 0x07000000, 0x00000200, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000040, 0x06000000, 0x000000b0, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000040, 0x07000000, 0x000000b0, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000090, 0x06000000, 0x00000060, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000090, 0x07000000, 0x00000060, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000080, 0x06000000, 0x00000070, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000080, 0x07000000, 0x00000070, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000060, 0x00c00000, 0x00000090, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000050, 0x00c00000, 0x000000a0, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000000a0, 0x00c00000, 0x00000050, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000800, 0x00c00000, 0x00000700, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000900, 0x00c00000, 0x00000600, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00008000, 0x00c00000, 0x00007000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00009000, 0x00c00000, 0x00006000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x0000a000, 0x00c00000, 0x00005000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000c0000, 0x00c00000, 0x00030000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000d0000, 0x00c00000, 0x00020000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000e0000, 0x00c00000, 0x00010000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000070, 0x00c00000, 0x00000080, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000700, 0x00c00000, 0x00000800, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00007000, 0x00c00000, 0x00008000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080000, 0x00c00000, 0x00070000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000c00, 0x00c00000, 0x00000300, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000d00, 0x00c00000, 0x00000200, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000040, 0x00c00000, 0x000000b0, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000090, 0x00c00000, 0x00000060, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000080, 0x00c00000, 0x00000070, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00400000, 0x00400000, 0x00b00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00600000, 0x00400000, 0x00900000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00300000, 0x00400000, 0x00c00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00500000, 0x00400000, 0x00a00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00700000, 0x00400000, 0x00800000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00200000, 0x00400000, 0x00d00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00800000, 0x00400000, 0x00700000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00900000, 0x00400000, 0x00600000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00a00000, 0x00400000, 0x00500000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00b00000, 0x00400000, 0x00400000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00400000, 0x00f00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00400000, 0x00f00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00100000, 0x00400000, 0x00e00000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00400000, 0x06000000, 0x00b00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00400000, 0x07000000, 0x00b00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00600000, 0x06000000, 0x00900000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00600000, 0x07000000, 0x00900000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00300000, 0x06000000, 0x00c00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00300000, 0x07000000, 0x00c00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00500000, 0x06000000, 0x00a00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00500000, 0x07000000, 0x00a00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00700000, 0x06000000, 0x00800000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00700000, 0x07000000, 0x00800000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00200000, 0x06000000, 0x00d00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00200000, 0x07000000, 0x00d00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00800000, 0x06000000, 0x00700000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00900000, 0x06000000, 0x00600000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00a00000, 0x06000000, 0x00500000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00b00000, 0x06000000, 0x00400000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00800000, 0x07000000, 0x00700000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00900000, 0x07000000, 0x00600000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00a00000, 0x07000000, 0x00500000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00b00000, 0x07000000, 0x00400000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x06000000, 0x00f00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x07000000, 0x00f00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x06000000, 0x00f00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00100000, 0x06000000, 0x00e00000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x07000000, 0x00f00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00100000, 0x07000000, 0x00e00000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00400000, 0x00c00000, 0x00b00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00600000, 0x00c00000, 0x00900000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00300000, 0x00c00000, 0x00c00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00500000, 0x00c00000, 0x00a00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00700000, 0x00c00000, 0x00800000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00200000, 0x00c00000, 0x00d00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00800000, 0x00c00000, 0x00700000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00900000, 0x00c00000, 0x00600000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00a00000, 0x00c00000, 0x00500000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00b00000, 0x00c00000, 0x00400000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00c00000, 0x00f00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00c00000, 0x00f00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00100000, 0x00c00000, 0x00e00000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000f0000, 0x00400000, 0x00000000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00f00000, 0x00400000, 0x00000000, 0x00b00000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000f0000, 0x06000000, 0x00000000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00f00000, 0x06000000, 0x00000000, 0x09000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000f0000, 0x07000000, 0x00000000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00f00000, 0x07000000, 0x00000000, 0x08000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x000f0000, 0x00c00000, 0x00000000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00f00000, 0x00c00000, 0x00000000, 0x00300000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x000f0000, 0x00000000, 0x00000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00f00000, 0x00000000, 0x00000000,
+ 0x00000001, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec010_data[] = {
+ 0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+ 0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+ 0x0000000a, 0x0000000a, 0x0000000a, 0x00000000,
+ 0x0000000b, 0x00000008, 0x00000009, 0x0000000f,
+ 0x0000000f, 0x0000000f, 0x0000000f, 0x0000000f,
+ 0x0000000c, 0x0000000d, 0x00000001, 0x00000001,
+ 0x0000000e, 0x00000005, 0x00000002, 0x00000002,
+ 0x00000004, 0x00000003, 0x00000003, 0x00000003,
+ 0x00000003, 0x00000040, 0x00000040, 0x00000040,
+ 0x00000040, 0x00000040, 0x00000040, 0x00000040,
+ 0x00000040, 0x00000040, 0x00000040, 0x00000040,
+ 0x00000045, 0x00000044, 0x00000044, 0x00000044,
+ 0x00000044, 0x00000044, 0x00000041, 0x00000042,
+ 0x00000043, 0x00000046, 0x00000046, 0x00000046,
+ 0x00000046, 0x00000046, 0x00000046, 0x00000046,
+ 0x00000046, 0x00000046, 0x00000046, 0x00000046,
+ 0x00000046, 0x00000046, 0x00000046, 0x00000046,
+ 0x00000046, 0x00000046, 0x00000046, 0x00000046,
+ 0x00000046, 0x00000046, 0x00000046, 0x0000004b,
+ 0x0000004b, 0x0000004a, 0x0000004a, 0x0000004a,
+ 0x0000004a, 0x0000004a, 0x0000004a, 0x0000004a,
+ 0x0000004a, 0x0000004a, 0x0000004a, 0x00000047,
+ 0x00000047, 0x00000048, 0x00000048, 0x00000049,
+ 0x00000049, 0x0000004c, 0x0000004c, 0x0000004c,
+ 0x0000004c, 0x0000004c, 0x0000004c, 0x0000004c,
+ 0x0000004c, 0x0000004c, 0x0000004c, 0x0000004c,
+ 0x00000051, 0x00000050, 0x00000050, 0x00000050,
+ 0x00000050, 0x00000050, 0x0000004d, 0x0000004e,
+ 0x0000004f, 0x00000052, 0x00000053, 0x00000054,
+ 0x00000054, 0x00000055, 0x00000056, 0x00000057,
+ 0x00000057, 0x00000057, 0x00000057, 0x00000058,
+ 0x00000059, 0x00000059, 0x0000005a, 0x0000005a,
+ 0x0000005b, 0x0000005b, 0x0000005c, 0x0000005c,
+ 0x0000005c, 0x0000005c, 0x0000005d, 0x0000005d,
+ 0x0000005e, 0x0000005e, 0x0000005f, 0x0000005f,
+ 0x0000005f, 0x0000005f, 0x0000005f, 0x0000005f,
+ 0x0000005f, 0x0000005f, 0x00000060, 0x00000060,
+ 0x00000061, 0x00000061, 0x00000061, 0x00000061,
+ 0x00000062, 0x00000063, 0x00000064, 0x00000064,
+ 0x00000065, 0x00000066, 0x00000067, 0x00000067,
+ 0x00000067, 0x00000067, 0x00000068, 0x00000069,
+ 0x00000069, 0x00000040, 0x00000040, 0x00000046,
+ 0x00000046, 0x00000046, 0x00000046, 0x0000004c,
+ 0x0000004c, 0x0000000a, 0x0000000a, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec011_data[] = {
+ 0x0008002c, 0x00080234, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080230,
+ 0x00080332, 0x0008063c, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0008002c, 0x00080234, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080230,
+ 0x00080332, 0x00080738, 0x0008083c, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0008002c, 0x00080234, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080230,
+ 0x00080332, 0x00080738, 0x0008093a, 0x00080a3c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080634, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080730, 0x00080834, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080730, 0x00080932, 0x00080a34,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+ 0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+ 0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+ 0x00090644, 0x00000000, 0x000d8045, 0x000d4145,
+ 0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00090145, 0x00090944, 0x00000000, 0x00000000,
+ 0x0009061c, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x0009033a,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+ 0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+ 0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009063d, 0x00090740, 0x000d803f, 0x000d413f,
+ 0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009013f, 0x00090840, 0x000dc93d, 0x000d093d,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0324, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a003e,
+ 0x000a0140, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0324, 0x000a0520, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a003e,
+ 0x000a0140, 0x000a0842, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0124, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0224, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a003c, 0x000a0037, 0x000ec139, 0x000e0139,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x000a0742, 0x00000000, 0x00000000,
+ 0x000a0d41, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0d3e, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0037, 0x000a0139, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080634, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080730, 0x00080834, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080730, 0x00080932, 0x00080a34,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009061c, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x0009033a,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+ 0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+ 0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009063d, 0x00090740, 0x000d803f, 0x000d413f,
+ 0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009013f, 0x00090840, 0x000dc93d, 0x000d093d,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a003c, 0x000a0037, 0x000ec139, 0x000e0139,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x000a0742, 0x00000000, 0x00000000,
+ 0x000a0d41, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0d3e, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0037, 0x000a0139, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec012_data[] = {
+ 0x00000006, 0x00000001, 0x00000004, 0x00000001,
+ 0x00000006, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000004, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000010, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000040, 0x00000001, 0x00000010, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x06200000, 0x00000001, 0x00c00000, 0x00000001,
+ 0x02c00000, 0x00000001, 0x00200000, 0x00000001,
+ 0x00400000, 0x00000001, 0x00700000, 0x00000001,
+ 0x00300000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00a00000, 0x00000001, 0x00b00000, 0x00000001,
+ 0x00e00000, 0x00000001, 0x00500000, 0x00000001,
+ 0x00800000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000004, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000040, 0x00000001, 0x00000010, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00500000, 0x00000001, 0x00700000, 0x00000001,
+ 0x00a00000, 0x00000001, 0x00b00000, 0x00000001,
+ 0x00200000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00300000, 0x00000001, 0x00800000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec013_data[] = {
+ 0xf7fffff0, 0xf7fffff1, 0xfffffff0, 0xf7fffff3,
+ 0xfffffff1, 0xfffffff3, 0xffffffff, 0xffffffff,
+ 0xf7ffff0f, 0xf7ffff0f, 0xffffff0f, 0xffffff0f,
+ 0xffffff0f, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x100fffff, 0xf10fffff, 0xf10fffff, 0xf70fffff,
+ 0xf70fffff, 0xff0fffff, 0xff0fffff, 0xff1fffff,
+ 0xff0fffff, 0xff0fffff, 0xff0fffff, 0xff0fffff,
+ 0xff1fffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xfffffff1, 0xfffffff3, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffff0f, 0xffffff0f, 0xffffff0f, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xff0fffff, 0xff0fffff, 0xff0fffff, 0xff0fffff,
+ 0xff0fffff, 0xff1fffff, 0xff0fffff, 0xff1fffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+};
+
+static const u32 nbl_sec014_data[] = {
+ 0x00000000, 0x00000001, 0x00000003, 0x00000002,
+ 0x00000004, 0x00000005, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000001, 0x00000002, 0x00000003,
+ 0x00000004, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000001, 0x00000002, 0x00000003, 0x00000000,
+ 0x00000000, 0x00000004, 0x00000005, 0x00000006,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000001, 0x00000002, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000001, 0x00000002, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000001, 0x00000001, 0x00000001,
+ 0x00000002, 0x00000003, 0x00000004, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec022_data[] = {
+ 0x81008100, 0x00000001, 0x88a88100, 0x00000001,
+ 0x810088a8, 0x00000001, 0x88a888a8, 0x00000001,
+ 0x81000000, 0x00000001, 0x88a80000, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x08004000, 0x00000001, 0x86dd6000, 0x00000001,
+ 0x81000000, 0x00000001, 0x88a80000, 0x00000001,
+ 0x08060000, 0x00000001, 0x80350000, 0x00000001,
+ 0x88080000, 0x00000001, 0x88f70000, 0x00000001,
+ 0x88cc0000, 0x00000001, 0x88090000, 0x00000001,
+ 0x89150000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000001,
+ 0x11006000, 0x00000001, 0x06006000, 0x00000001,
+ 0x02006000, 0x00000001, 0x3a006000, 0x00000001,
+ 0x2f006000, 0x00000001, 0x84006000, 0x00000001,
+ 0x32006000, 0x00000001, 0x2c006000, 0x00000001,
+ 0x3c006000, 0x00000001, 0x2b006000, 0x00000001,
+ 0x00006000, 0x00000001, 0x00004000, 0x00000001,
+ 0x00004000, 0x00000001, 0x20004000, 0x00000001,
+ 0x40004000, 0x00000001, 0x00000000, 0x00000001,
+ 0x11000000, 0x00000001, 0x06000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x2f000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x2c000000, 0x00000001,
+ 0x2b000000, 0x00000001, 0x3c000000, 0x00000001,
+ 0x3b000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x11000000, 0x00000001, 0x06000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x2f000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x2c000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x2b000000, 0x00000001, 0x3c000000, 0x00000001,
+ 0x3b000000, 0x00000001, 0x00000000, 0x00000001,
+ 0x06001072, 0x00000001, 0x06000000, 0x00000001,
+ 0x110012b7, 0x00000001, 0x01000000, 0x00000001,
+ 0x02000000, 0x00000001, 0x3a000000, 0x00000001,
+ 0x32000000, 0x00000001, 0x84000000, 0x00000001,
+ 0x11000043, 0x00000001, 0x11000044, 0x00000001,
+ 0x11000222, 0x00000001, 0x11000000, 0x00000001,
+ 0x2f006558, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec023_data[] = {
+ 0x10001000, 0x00001000, 0x10000000, 0x00000000,
+ 0x1000ffff, 0x0000ffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00000fff, 0x00000fff, 0x1000ffff, 0x0000ffff,
+ 0x0000ffff, 0x0000ffff, 0x0000ffff, 0x0000ffff,
+ 0x0000ffff, 0x0000ffff, 0x0000ffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff,
+ 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff,
+ 0x00ff0fff, 0x10ff0fff, 0xffff0fff, 0x00000fff,
+ 0x1fff0fff, 0x1fff0fff, 0x1fff0fff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00ffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff,
+ 0x00ff0000, 0x00ffffff, 0x00ff0000, 0x00ffffff,
+ 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff,
+ 0x00ff0000, 0x00ff0000, 0x00ff0001, 0x00ffffff,
+ 0x00ff0000, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+};
+
+static const u32 nbl_sec024_data[] = {
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00809190, 0x16009496, 0x00000100, 0x00000000,
+ 0x00800090, 0x12009092, 0x00000100, 0x00000000,
+ 0x00800090, 0x12009092, 0x00000100, 0x00000000,
+ 0x00800000, 0x0e008c8e, 0x00000100, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x08900081, 0x00008680, 0x00000200, 0x00000000,
+ 0x10900082, 0x28008680, 0x00000200, 0x00000000,
+ 0x809b0093, 0x00000000, 0x00000100, 0x00000000,
+ 0x809b0093, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b008f, 0x00000000, 0x00000100, 0x00000000,
+ 0x009b0000, 0x00000000, 0x00000100, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x009b0000, 0x00000000, 0x00000100, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000200, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000200, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000200, 0x00000000,
+ 0x40000000, 0x01c180c2, 0x00000300, 0x00000000,
+ 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000,
+ 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000,
+ 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000,
+ 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000,
+ 0x000b0000, 0x00000000, 0x00000200, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000300, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000300, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000300, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000300, 0x00000000,
+ 0x40000000, 0x01c180c2, 0x00000400, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000082, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0085, 0x08000000, 0x00000400, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000400, 0x00000000,
+ 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000,
+ 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000,
+ 0x02ab848a, 0x08000000, 0x00000500, 0x00000000,
+ 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000500, 0x00000000,
+ 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000500, 0x00000000,
+ 0x04ab8e84, 0x0c000000, 0x00000500, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000500, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000500, 0x00000000,
+ 0x02ab848f, 0x08000000, 0x00000500, 0x00000000,
+ 0x02ab0084, 0x08000000, 0x00000500, 0x00000000,
+ 0x00ab0000, 0x04000000, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00ab0000, 0x00000000, 0x00000500, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec025_data[] = {
+ 0x00000060, 0x00000090, 0x00000001, 0x00000000,
+ 0x00000050, 0x000000a0, 0x00000001, 0x00000000,
+ 0x000000a0, 0x00000050, 0x00000001, 0x00000000,
+ 0x00000800, 0x00000700, 0x00000001, 0x00000000,
+ 0x00000900, 0x00000600, 0x00000001, 0x00000000,
+ 0x00008000, 0x00007000, 0x00000001, 0x00000000,
+ 0x00009000, 0x00006000, 0x00000001, 0x00000000,
+ 0x0000a000, 0x00005000, 0x00000001, 0x00000000,
+ 0x000c0000, 0x00030000, 0x00000001, 0x00000000,
+ 0x000d0000, 0x00020000, 0x00000001, 0x00000000,
+ 0x000e0000, 0x00010000, 0x00000001, 0x00000000,
+ 0x00000040, 0x000000b0, 0x00000001, 0x00000000,
+ 0x00000070, 0x00000080, 0x00000001, 0x00000000,
+ 0x00000090, 0x00000060, 0x00000001, 0x00000000,
+ 0x00000080, 0x00000070, 0x00000001, 0x00000000,
+ 0x00000700, 0x00000800, 0x00000001, 0x00000000,
+ 0x00007000, 0x00008000, 0x00000001, 0x00000000,
+ 0x00080000, 0x00070000, 0x00000001, 0x00000000,
+ 0x00000c00, 0x00000300, 0x00000001, 0x00000000,
+ 0x00000d00, 0x00000200, 0x00000001, 0x00000000,
+ 0x00400000, 0x00b00000, 0x00000001, 0x00000000,
+ 0x00600000, 0x00900000, 0x00000001, 0x00000000,
+ 0x00300000, 0x00c00000, 0x00000001, 0x00000000,
+ 0x00500000, 0x00a00000, 0x00000001, 0x00000000,
+ 0x00700000, 0x00800000, 0x00000001, 0x00000000,
+ 0x00000000, 0x00f00000, 0x00000001, 0x00000000,
+ 0x00000000, 0x00f00000, 0x00000001, 0x00000000,
+ 0x00100000, 0x00e00000, 0x00000001, 0x00000000,
+ 0x00200000, 0x00d00000, 0x00000001, 0x00000000,
+ 0x00800000, 0x00700000, 0x00000001, 0x00000000,
+ 0x00900000, 0x00600000, 0x00000001, 0x00000000,
+ 0x00a00000, 0x00500000, 0x00000001, 0x00000000,
+ 0x00b00000, 0x00400000, 0x00000001, 0x00000000,
+ 0x000f0000, 0x00000000, 0x00000001, 0x00000000,
+ 0x00f00000, 0x00000000, 0x00000001, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec026_data[] = {
+ 0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+ 0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+ 0x0000000a, 0x0000000a, 0x0000000a, 0x00000000,
+ 0x0000000b, 0x00000008, 0x00000009, 0x0000000f,
+ 0x0000000f, 0x0000000f, 0x0000000f, 0x0000000f,
+ 0x0000000c, 0x0000000d, 0x00000001, 0x00000001,
+ 0x0000000e, 0x00000005, 0x00000002, 0x00000002,
+ 0x00000004, 0x00000003, 0x00000003, 0x00000003,
+ 0x00000003, 0x0000000a, 0x0000000a, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec027_data[] = {
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080634, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080730, 0x00080834, 0x0008082e,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00080020, 0x00080228, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00080224,
+ 0x00080326, 0x00080730, 0x00080932, 0x00080a34,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009061c, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x0009033a,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+ 0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+ 0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009063d, 0x00090740, 0x000d803f, 0x000d413f,
+ 0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0009013f, 0x00090840, 0x000dc93d, 0x000d093d,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a003c, 0x000a0037, 0x000ec139, 0x000e0139,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x000a0742, 0x00000000, 0x00000000,
+ 0x000a0d41, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+ 0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0d3e, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000a0037, 0x000a0139, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec028_data[] = {
+ 0x00000006, 0x00000001, 0x00000004, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000040, 0x00000001, 0x00000010, 0x00000001,
+ 0x00000000, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00500000, 0x00000001, 0x00700000, 0x00000001,
+ 0x00a00000, 0x00000001, 0x00b00000, 0x00000001,
+ 0x00200000, 0x00000001, 0x00000000, 0x00000001,
+ 0x00300000, 0x00000001, 0x00800000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec029_data[] = {
+ 0xfffffff0, 0xfffffff1, 0xfffffff3, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffff0f, 0xffffff0f, 0xffffff0f, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xff0fffff, 0xff0fffff, 0xff0fffff, 0xff0fffff,
+ 0xff0fffff, 0xff1fffff, 0xff0fffff, 0xff1fffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+ 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+};
+
+static u32 nbl_sec030_data[] = {
+ 0x00000000, 0x00000001, 0x00000002, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000001, 0x00000002, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000001, 0x00000001, 0x00000001,
+ 0x00000002, 0x00000003, 0x00000004, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec039_data[] = {
+ 0xfef80000, 0x00000002, 0x000002e0, 0x00000000,
+ 0xfef8013e, 0x00000002, 0x000002e0, 0x00000000,
+ 0x6660013e, 0x726e6802, 0x02224e42, 0x00000000,
+ 0x6660013e, 0x726e6802, 0x02224e42, 0x00000000,
+ 0x66600000, 0x726e6802, 0x02224e42, 0x00000000,
+ 0x66600000, 0x726e6802, 0x02224e42, 0x00000000,
+ 0x66600000, 0x00026802, 0x02224e40, 0x00000000,
+ 0x66627800, 0x00026802, 0x02224e40, 0x00000000,
+ 0x66600000, 0x00026a76, 0x02224e40, 0x00000000,
+ 0x66600000, 0x00026802, 0x00024e40, 0x00000000,
+ 0x66600000, 0x00026802, 0x00024e40, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec040_data[] = {
+ 0x0040fb3f, 0x00000001, 0x0440fb3f, 0x00000001,
+ 0x0502fa00, 0x00000001, 0x0602f900, 0x00000001,
+ 0x0903e600, 0x00000001, 0x0a03e500, 0x00000001,
+ 0x1101e600, 0x00000001, 0x1201e500, 0x00000001,
+ 0x0000ff00, 0x00000001, 0x0008ff07, 0x00000001,
+ 0x00ffff00, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec046_4p_data[] = {
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xa0000000, 0x00077c2b, 0x005c0000,
+ 0x00000000, 0x00008100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x00073029, 0x00480000,
+ 0x00000000, 0x00008100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x00073029, 0x00480000,
+ 0x70000000, 0x00000020, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xa0000000, 0x00000009, 0x00000000,
+ 0x00000000, 0x00002100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00000009, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x70000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x70000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x38430000,
+ 0x70000006, 0x00000020, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x98cb1180, 0x6e36d469,
+ 0x9d8eb91c, 0x87e3ef47, 0xa2931288, 0x08405c5a,
+ 0x73865086, 0x00000080, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x000b3849, 0x38430000,
+ 0x00000006, 0x0000c100, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00133889, 0x08400000,
+ 0x03865086, 0x4c016100, 0x00000014, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec047_data[] = {
+ 0x2040dc3f, 0x00000001, 0x2000dcff, 0x00000001,
+ 0x2200dcff, 0x00000001, 0x0008dc01, 0x00000001,
+ 0x0001de00, 0x00000001, 0x2900c4ff, 0x00000001,
+ 0x3100c4ff, 0x00000001, 0x2b00c4ff, 0x00000001,
+ 0x3300c4ff, 0x00000001, 0x2700d8ff, 0x00000001,
+ 0x2300d8ff, 0x00000001, 0x2502d800, 0x00000001,
+ 0x2102d800, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec052_data[] = {
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x30000000, 0x000b844c, 0xc8580000,
+ 0x00000006, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0xb0d3668b, 0xb0555e12,
+ 0x03b055c6, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0xa64b3449, 0x405a3cc1,
+ 0x00000006, 0x3d2d3300, 0x00000010, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x20000000, 0x26473429, 0x00482cc1,
+ 0x00000000, 0x00ccd300, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec053_data[] = {
+ 0x0840f03f, 0x00000001, 0x0040f03f, 0x00000001,
+ 0x0140fa3f, 0x00000001, 0x0100fa0f, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec058_data[] = {
+ 0x00000000, 0x00000000, 0x59f89400, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00470000,
+ 0x00000000, 0x3c000000, 0xa2e40006, 0x00000017,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x19fa1400, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x28440000,
+ 0x038e5186, 0x3c000000, 0xa8e40012, 0x00000047,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x0001f3d0, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00133889, 0x38c30000,
+ 0x0000000a, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x0001f3d0, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00133889, 0x38c30000,
+ 0x0000000a, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x000113d0, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00073829, 0x00430000,
+ 0x00000000, 0x3c000000, 0x0000000a, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x000293d0, 0x00000000,
+ 0x00000000, 0xb0000000, 0x00133889, 0x08400000,
+ 0x03865086, 0x3c000000, 0x00000016, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec059_data[] = {
+ 0x0200e4ff, 0x00000001, 0x0400e2ff, 0x00000001,
+ 0x1300ecff, 0x00000001, 0x1500eaff, 0x00000001,
+ 0x0300e4ff, 0x00000001, 0x0500e2ff, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec062_data[] = {
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec063_data[] = {
+ 0x0500e2ff, 0x00000001, 0x0900e2ff, 0x00000001,
+ 0x1900e2ff, 0x00000001, 0x1100e2ff, 0x00000001,
+ 0x0100e2ff, 0x00000001, 0x0600e1ff, 0x00000001,
+ 0x0a00e1ff, 0x00000001, 0x1a00e1ff, 0x00000001,
+ 0x1200e1ff, 0x00000001, 0x0200e1ff, 0x00000001,
+ 0x0000fcff, 0x00000001, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec065_data[] = {
+ 0x006e120c, 0x006e1210, 0x006e4208, 0x006e4218,
+ 0x00200b02, 0x00200b00, 0x000e1900, 0x000e1906,
+ 0x00580208, 0x00580204, 0x004c0208, 0x004c0207,
+ 0x0002110c, 0x0002110c, 0x0012010c, 0x00100110,
+ 0x0010010c, 0x000a010c, 0x0008010c, 0x00060000,
+ 0x00160000, 0x00140000, 0x001e0000, 0x001e0000,
+ 0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+ 0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+};
+
+static u32 nbl_sec066_data[] = {
+ 0x006e120c, 0x006e1210, 0x006e4208, 0x006e4218,
+ 0x00200b02, 0x00200b00, 0x000e1900, 0x000e1906,
+ 0x00580208, 0x00580204, 0x004c0208, 0x004c0207,
+ 0x0002110c, 0x0002110c, 0x0012010c, 0x00100110,
+ 0x0010010c, 0x000a010c, 0x0008010c, 0x00060000,
+ 0x00160000, 0x00140000, 0x001e0000, 0x001e0000,
+ 0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+ 0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+};
+
+static u32 nbl_sec071_4p_data[] = {
+ 0x00000000, 0x00000000, 0x00113d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7029b00, 0x00000000,
+ 0x00000000, 0x43000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x51e00000, 0x00000c9c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00293d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x67089b00, 0x00000002,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x80000000, 0x00000000, 0xb1e00000, 0x0000189c,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+ 0x00000000, 0x43000000, 0x014b0c70, 0x00000000,
+ 0x00000000, 0x00000000, 0x92600000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+ 0x00000000, 0x43000000, 0x015b0c70, 0x00000000,
+ 0x00000000, 0x00000000, 0x92600000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe6d29a00, 0x000149c4,
+ 0x00000000, 0x4b000000, 0x00000004, 0x00000000,
+ 0x80000000, 0x00022200, 0x62600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+ 0x00000000, 0x00000000, 0xe6d2c000, 0x000149c4,
+ 0x00000000, 0x5b000000, 0x00000004, 0x00000000,
+ 0x80000000, 0x00022200, 0x62600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x64d49200, 0x5e556945,
+ 0xc666d89a, 0x4b0001a9, 0x00004c84, 0x00000000,
+ 0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+ 0x00000000, 0x00000000, 0x6ed4ba00, 0x5ef56bc5,
+ 0xc666d8c0, 0x5b0001a9, 0x00004dc4, 0x00000000,
+ 0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000002, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00700000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec072_data[] = {
+ 0x84006aff, 0x00000001, 0x880066ff, 0x00000001,
+ 0x140040ff, 0x00000001, 0x70000cff, 0x00000001,
+ 0x180040ff, 0x00000001, 0x30000cff, 0x00000001,
+ 0x10004cff, 0x00000001, 0x30004cff, 0x00000001,
+ 0x0100ecff, 0x00000001, 0x0300ecff, 0x00000001,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec116_data[] = {
+ 0x00000000, 0x00000000, 0x3fff8000, 0x00000007,
+ 0x3fff8000, 0x00000007, 0x3fff8000, 0x00000007,
+ 0x3fff8000, 0x00000003, 0x3fff8000, 0x00000003,
+ 0x3fff8000, 0x00000007, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec124_data[] = {
+ 0xfffffffc, 0xffffffff, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000500, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x00300010, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000500, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x00300010, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000500, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x00300fff, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x00301fff, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0xffffffff, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000500, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0000fffe, 0x00000000, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000480, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0x00ffffff, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000480, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffe, 0x0000000f, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000580, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec125_data[] = {
+ 0xfffffffc, 0x01ffffff, 0x00300000, 0x70000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000480, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffe, 0x00000001, 0x00300000, 0x70000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000540, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffe, 0x011003ff, 0x00300000, 0x70000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000005c0, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0x103fffff, 0x00300001, 0x70000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000480, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec126_data[] = {
+ 0xfffffffc, 0xffffffff, 0x00300001, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000500, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffe, 0x000001ff, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x000005c0, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00002013, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000400, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00002013, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000400, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffc, 0x01ffffff, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000480, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0xfffffffe, 0x00000001, 0x00300000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000540, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 nbl_sec137_data[] = {
+ 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a,
+ 0x0000017a, 0x00000080, 0x00000024, 0x0000017a,
+ 0x0000017a, 0x00000191, 0x00000035, 0x0000017a,
+ 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a,
+ 0x0000017a, 0x000000d2, 0x00000066, 0x0000017a,
+ 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a,
+ 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a,
+ 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a,
+};
+
+static const u32 nbl_sec138_data[] = {
+ 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a,
+ 0x0000017a, 0x00000080, 0x00000024, 0x0000017a,
+ 0x0000017a, 0x00000191, 0x00000035, 0x0000017a,
+ 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a,
+ 0x0000017a, 0x000000d2, 0x00000066, 0x0000017a,
+ 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a,
+ 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a,
+ 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a,
+};
+
+void nbl_write_all_regs(void *priv)
+{
+	struct nbl_hw_mgt *hw_mgt = priv;
+	struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+	const u32 *nbl_sec046_data;
+	const u32 *nbl_sec071_data;
+ u8 eth_mode = NBL_COMMON_TO_ETH_MODE(common);
+ u32 i = 0;
+
+ switch (eth_mode) {
+ case 1:
+ nbl_sec046_data = nbl_sec046_1p_data;
+ nbl_sec071_data = nbl_sec071_1p_data;
+ break;
+ case 2:
+ nbl_sec046_data = nbl_sec046_2p_data;
+ nbl_sec071_data = nbl_sec071_2p_data;
+ break;
+ case 4:
+ nbl_sec046_data = nbl_sec046_4p_data;
+ nbl_sec071_data = nbl_sec071_4p_data;
+ break;
+	default:
+		nbl_sec046_data = nbl_sec046_2p_data;
+		nbl_sec071_data = nbl_sec071_2p_data;
+		break;
+	}
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC006_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC006_REGI(i), nbl_sec006_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC007_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC007_REGI(i), nbl_sec007_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC008_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC008_REGI(i), nbl_sec008_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC009_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC009_REGI(i), nbl_sec009_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC010_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC010_REGI(i), nbl_sec010_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC011_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC011_REGI(i), nbl_sec011_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC012_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC012_REGI(i), nbl_sec012_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC013_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC013_REGI(i), nbl_sec013_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC014_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC014_REGI(i), nbl_sec014_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC022_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC022_REGI(i), nbl_sec022_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC023_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC023_REGI(i), nbl_sec023_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC024_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC024_REGI(i), nbl_sec024_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC025_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC025_REGI(i), nbl_sec025_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC026_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC026_REGI(i), nbl_sec026_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC027_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC027_REGI(i), nbl_sec027_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC028_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC028_REGI(i), nbl_sec028_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC029_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC029_REGI(i), nbl_sec029_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC030_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC030_REGI(i), nbl_sec030_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC039_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC039_REGI(i), nbl_sec039_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC040_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC040_REGI(i), nbl_sec040_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC046_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC046_REGI(i), nbl_sec046_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC047_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC047_REGI(i), nbl_sec047_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC052_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC052_REGI(i), nbl_sec052_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC053_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC053_REGI(i), nbl_sec053_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC058_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC058_REGI(i), nbl_sec058_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC059_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC059_REGI(i), nbl_sec059_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC062_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC062_REGI(i), nbl_sec062_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC063_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC063_REGI(i), nbl_sec063_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC065_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC065_REGI(i), nbl_sec065_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC066_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC066_REGI(i), nbl_sec066_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC071_SIZE; i++) {
+ if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0)
+ nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ nbl_hw_wr32(hw_mgt, NBL_SEC071_REGI(i), nbl_sec071_data[i]);
+ }
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC072_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC072_REGI(i), nbl_sec072_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC116_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC116_REGI(i), nbl_sec116_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC124_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC124_REGI(i), nbl_sec124_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC125_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC125_REGI(i), nbl_sec125_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC126_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC126_REGI(i), nbl_sec126_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC137_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC137_REGI(i), nbl_sec137_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ for (i = 0; i < NBL_SEC138_SIZE; i++)
+ nbl_hw_wr32(hw_mgt, NBL_SEC138_REGI(i), nbl_sec138_data[i]);
+
+ nbl_flush_writes(hw_mgt);
+ nbl_hw_wr32(hw_mgt, NBL_SEC000_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC001_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC002_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC003_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC004_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC005_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC015_ADDR, 0x000f0908);
+ nbl_hw_wr32(hw_mgt, NBL_SEC016_ADDR, 0x10110607);
+ nbl_hw_wr32(hw_mgt, NBL_SEC017_ADDR, 0x383a3032);
+ nbl_hw_wr32(hw_mgt, NBL_SEC018_ADDR, 0x0201453f);
+ nbl_hw_wr32(hw_mgt, NBL_SEC019_ADDR, 0x00000a41);
+ nbl_hw_wr32(hw_mgt, NBL_SEC020_ADDR, 0x000000c8);
+ nbl_hw_wr32(hw_mgt, NBL_SEC021_ADDR, 0x00000400);
+ nbl_hw_wr32(hw_mgt, NBL_SEC031_ADDR, 0x000f0908);
+ nbl_hw_wr32(hw_mgt, NBL_SEC032_ADDR, 0x00001011);
+ nbl_hw_wr32(hw_mgt, NBL_SEC033_ADDR, 0x00003032);
+ nbl_hw_wr32(hw_mgt, NBL_SEC034_ADDR, 0x0201003f);
+ nbl_hw_wr32(hw_mgt, NBL_SEC035_ADDR, 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC036_ADDR, 0x00001701);
+ nbl_hw_wr32(hw_mgt, NBL_SEC037_ADDR, 0x009238a1);
+ nbl_hw_wr32(hw_mgt, NBL_SEC038_ADDR, 0x0000002e);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(0), 0x00000200);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(1), 0x00000300);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(2), 0x00000105);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(3), 0x00000106);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(4), 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(5), 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(6), 0x00000041);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(7), 0x00000082);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(8), 0x00000020);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(9), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(10), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(11), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(12), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(13), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(14), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(15), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC042_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC043_ADDR, 0x00000002);
+ nbl_hw_wr32(hw_mgt, NBL_SEC044_ADDR, 0x28212000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC045_ADDR, 0x00002b29);
+ nbl_hw_wr32(hw_mgt, NBL_SEC048_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC049_ADDR, 0x00000002);
+ nbl_hw_wr32(hw_mgt, NBL_SEC050_ADDR, 0x352b2000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC051_ADDR, 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC054_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC055_ADDR, 0x00000002);
+ nbl_hw_wr32(hw_mgt, NBL_SEC056_ADDR, 0x2b222100);
+ nbl_hw_wr32(hw_mgt, NBL_SEC057_ADDR, 0x00000038);
+ nbl_hw_wr32(hw_mgt, NBL_SEC060_ADDR, 0x24232221);
+ nbl_hw_wr32(hw_mgt, NBL_SEC061_ADDR, 0x0000002e);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(0), 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(1), 0x00000005);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(2), 0x00000011);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(3), 0x00000005);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(4), 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(5), 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(6), 0x00000006);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(7), 0x00000012);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(8), 0x00000006);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(9), 0x00000002);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(10), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(11), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(12), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(13), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(14), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(15), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC067_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC068_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC069_ADDR, 0x22212000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC070_ADDR, 0x3835322b);
+ nbl_hw_wr32(hw_mgt, NBL_SEC073_ADDR, 0x0316a5ff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC074_ADDR, 0x0316a5ff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(0), 0x08802080);
+ nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(1), 0x12a05080);
+ nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(2), 0xffffffff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(3), 0xffffffff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(0), 0x08802080);
+ nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(1), 0x12a05080);
+ nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(2), 0xffffffff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(3), 0xffffffff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(0), 0x08802080);
+ nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(1), 0x12a05080);
+ nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(2), 0xffffffff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(3), 0xffffffff);
+ nbl_hw_wr32(hw_mgt, NBL_SEC078_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC079_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC080_ADDR, 0x0014a248);
+ nbl_hw_wr32(hw_mgt, NBL_SEC081_ADDR, 0x00000d33);
+ nbl_hw_wr32(hw_mgt, NBL_SEC082_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC083_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC084_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC085_ADDR, 0x000144d2);
+ nbl_hw_wr32(hw_mgt, NBL_SEC086_ADDR, 0x31322e2f);
+ nbl_hw_wr32(hw_mgt, NBL_SEC087_ADDR, 0x0a092d2c);
+ nbl_hw_wr32(hw_mgt, NBL_SEC088_ADDR, 0x33050804);
+ nbl_hw_wr32(hw_mgt, NBL_SEC089_ADDR, 0x14131535);
+ nbl_hw_wr32(hw_mgt, NBL_SEC090_ADDR, 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC091_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC092_ADDR, 0x00000008);
+ nbl_hw_wr32(hw_mgt, NBL_SEC093_ADDR, 0x0000000e);
+ nbl_hw_wr32(hw_mgt, NBL_SEC094_ADDR, 0x0000000f);
+ nbl_hw_wr32(hw_mgt, NBL_SEC095_ADDR, 0x00000015);
+ nbl_hw_wr32(hw_mgt, NBL_SEC096_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC097_ADDR, 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC098_ADDR, 0x00000008);
+ nbl_hw_wr32(hw_mgt, NBL_SEC099_ADDR, 0x00000011);
+ nbl_hw_wr32(hw_mgt, NBL_SEC100_ADDR, 0x00000013);
+ nbl_hw_wr32(hw_mgt, NBL_SEC101_ADDR, 0x00000014);
+ nbl_hw_wr32(hw_mgt, NBL_SEC102_ADDR, 0x00000010);
+ nbl_hw_wr32(hw_mgt, NBL_SEC103_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC104_ADDR, 0x0000004d);
+ nbl_hw_wr32(hw_mgt, NBL_SEC105_ADDR, 0x08020a09);
+ nbl_hw_wr32(hw_mgt, NBL_SEC106_ADDR, 0x00000005);
+ nbl_hw_wr32(hw_mgt, NBL_SEC107_ADDR, 0x00000006);
+ nbl_hw_wr32(hw_mgt, NBL_SEC108_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC109_ADDR, 0x00110a09);
+ nbl_hw_wr32(hw_mgt, NBL_SEC110_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC111_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC112_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC113_ADDR, 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC114_ADDR, 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC115_ADDR, 0x00000009);
+ nbl_hw_wr32(hw_mgt, NBL_SEC117_ADDR, 0x0000000a);
+ nbl_hw_wr32(hw_mgt, NBL_SEC118_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(0), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(1), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(2), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(3), 0x00000000);
+ nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(4), 0x00000100);
+ nbl_hw_wr32(hw_mgt, NBL_SEC120_ADDR, 0x0000003c);
+ nbl_hw_wr32(hw_mgt, NBL_SEC121_ADDR, 0x00000003);
+ nbl_hw_wr32(hw_mgt, NBL_SEC122_ADDR, 0x000000bc);
+ nbl_hw_wr32(hw_mgt, NBL_SEC123_ADDR, 0x0000023b);
+ nbl_hw_wr32(hw_mgt, NBL_SEC127_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC128_ADDR, 0x00000001);
+ nbl_hw_wr32(hw_mgt, NBL_SEC129_ADDR, 0x00000002);
+ nbl_hw_wr32(hw_mgt, NBL_SEC130_ADDR, 0x00000002);
+ nbl_hw_wr32(hw_mgt, NBL_SEC131_ADDR, 0x00000003);
+ nbl_hw_wr32(hw_mgt, NBL_SEC132_ADDR, 0x00000003);
+ nbl_hw_wr32(hw_mgt, NBL_SEC133_ADDR, 0x00000004);
+ nbl_hw_wr32(hw_mgt, NBL_SEC134_ADDR, 0x00000004);
+ nbl_hw_wr32(hw_mgt, NBL_SEC135_ADDR, 0x0000000e);
+ nbl_hw_wr32(hw_mgt, NBL_SEC136_ADDR, 0x0000000e);
+ nbl_flush_writes(hw_mgt);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h
new file mode 100644
index 000000000000..187f7557cc9e
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_HW_LEONIS_REGS_H_
+#define _NBL_HW_LEONIS_REGS_H_
+
+void nbl_write_all_regs(void *priv);
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
index f42be0e4e701..9fdc1429a4ad 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
@@ -13,6 +13,344 @@
#define NBL_CHAN_OPS_TBL_TO_OPS(chan_ops_tbl) ((chan_ops_tbl)->ops)
#define NBL_CHAN_OPS_TBL_TO_PRIV(chan_ops_tbl) ((chan_ops_tbl)->priv)
+enum nbl_chan_msg_type {
+ NBL_CHAN_MSG_ACK,
+ NBL_CHAN_MSG_ADD_MACVLAN,
+ NBL_CHAN_MSG_DEL_MACVLAN,
+ NBL_CHAN_MSG_ADD_MULTI_RULE,
+ NBL_CHAN_MSG_DEL_MULTI_RULE,
+ NBL_CHAN_MSG_SETUP_MULTI_GROUP,
+ NBL_CHAN_MSG_REMOVE_MULTI_GROUP,
+ NBL_CHAN_MSG_REGISTER_NET,
+ NBL_CHAN_MSG_UNREGISTER_NET,
+ NBL_CHAN_MSG_ALLOC_TXRX_QUEUES,
+ NBL_CHAN_MSG_FREE_TXRX_QUEUES,
+ NBL_CHAN_MSG_SETUP_QUEUE,
+ NBL_CHAN_MSG_REMOVE_ALL_QUEUES,
+ NBL_CHAN_MSG_CFG_DSCH,
+ NBL_CHAN_MSG_SETUP_CQS,
+ NBL_CHAN_MSG_REMOVE_CQS,
+ NBL_CHAN_MSG_CFG_QDISC_MQPRIO,
+ NBL_CHAN_MSG_CONFIGURE_MSIX_MAP,
+ NBL_CHAN_MSG_DESTROY_MSIX_MAP,
+ NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ,
+ NBL_CHAN_MSG_GET_GLOBAL_VECTOR,
+ NBL_CHAN_MSG_GET_VSI_ID,
+ NBL_CHAN_MSG_SET_PROSISC_MODE,
+ NBL_CHAN_MSG_GET_FIRMWARE_VERSION,
+ NBL_CHAN_MSG_GET_QUEUE_ERR_STATS,
+ NBL_CHAN_MSG_GET_COALESCE,
+ NBL_CHAN_MSG_SET_COALESCE,
+ NBL_CHAN_MSG_SET_SPOOF_CHECK_ADDR,
+ NBL_CHAN_MSG_SET_VF_SPOOF_CHECK,
+ NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE,
+ NBL_CHAN_MSG_GET_RXFH_INDIR,
+ NBL_CHAN_MSG_GET_RXFH_RSS_KEY,
+ NBL_CHAN_MSG_GET_RXFH_RSS_ALG_SEL,
+ NBL_CHAN_MSG_GET_HW_CAPS,
+ NBL_CHAN_MSG_GET_HW_STATE,
+ NBL_CHAN_MSG_REGISTER_RDMA,
+ NBL_CHAN_MSG_UNREGISTER_RDMA,
+ NBL_CHAN_MSG_GET_REAL_HW_ADDR,
+ NBL_CHAN_MSG_GET_REAL_BDF,
+ NBL_CHAN_MSG_GRC_PROCESS,
+ NBL_CHAN_MSG_SET_SFP_STATE,
+ NBL_CHAN_MSG_SET_ETH_LOOPBACK,
+ NBL_CHAN_MSG_CHECK_ACTIVE_VF,
+ NBL_CHAN_MSG_GET_PRODUCT_FLEX_CAP,
+ NBL_CHAN_MSG_ALLOC_KTLS_TX_INDEX,
+ NBL_CHAN_MSG_FREE_KTLS_TX_INDEX,
+ NBL_CHAN_MSG_CFG_KTLS_TX_KEYMAT,
+ NBL_CHAN_MSG_ALLOC_KTLS_RX_INDEX,
+ NBL_CHAN_MSG_FREE_KTLS_RX_INDEX,
+ NBL_CHAN_MSG_CFG_KTLS_RX_KEYMAT,
+ NBL_CHAN_MSG_CFG_KTLS_RX_RECORD,
+ NBL_CHAN_MSG_ADD_KTLS_RX_FLOW,
+ NBL_CHAN_MSG_DEL_KTLS_RX_FLOW,
+ NBL_CHAN_MSG_ALLOC_IPSEC_TX_INDEX,
+ NBL_CHAN_MSG_FREE_IPSEC_TX_INDEX,
+ NBL_CHAN_MSG_ALLOC_IPSEC_RX_INDEX,
+ NBL_CHAN_MSG_FREE_IPSEC_RX_INDEX,
+ NBL_CHAN_MSG_CFG_IPSEC_TX_SAD,
+ NBL_CHAN_MSG_CFG_IPSEC_RX_SAD,
+ NBL_CHAN_MSG_ADD_IPSEC_TX_FLOW,
+ NBL_CHAN_MSG_DEL_IPSEC_TX_FLOW,
+ NBL_CHAN_MSG_ADD_IPSEC_RX_FLOW,
+ NBL_CHAN_MSG_DEL_IPSEC_RX_FLOW,
+ NBL_CHAN_MSG_NOTIFY_IPSEC_HARD_EXPIRE,
+ NBL_CHAN_MSG_GET_MBX_IRQ_NUM,
+ NBL_CHAN_MSG_CLEAR_FLOW,
+ NBL_CHAN_MSG_CLEAR_QUEUE,
+ NBL_CHAN_MSG_GET_ETH_ID,
+ NBL_CHAN_MSG_SET_OFFLOAD_STATUS,
+ NBL_CHAN_MSG_INIT_OFLD,
+ NBL_CHAN_MSG_INIT_CMDQ,
+ NBL_CHAN_MSG_DESTROY_CMDQ,
+ NBL_CHAN_MSG_RESET_CMDQ,
+ NBL_CHAN_MSG_INIT_FLOW,
+ NBL_CHAN_MSG_DEINIT_FLOW,
+ NBL_CHAN_MSG_OFFLOAD_FLOW_RULE,
+ NBL_CHAN_MSG_GET_ACL_SWITCH,
+ NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID,
+ NBL_CHAN_MSG_INIT_REP,
+ NBL_CHAN_MSG_GET_LINE_RATE_INFO,
+ NBL_CHAN_MSG_REGISTER_NET_REP,
+ NBL_CHAN_MSG_UNREGISTER_NET_REP,
+ NBL_CHAN_MSG_REGISTER_ETH_REP,
+ NBL_CHAN_MSG_UNREGISTER_ETH_REP,
+ NBL_CHAN_MSG_REGISTER_UPCALL_PORT,
+ NBL_CHAN_MSG_UNREGISTER_UPCALL_PORT,
+ NBL_CHAN_MSG_GET_PORT_STATE,
+ NBL_CHAN_MSG_SET_PORT_ADVERTISING,
+ NBL_CHAN_MSG_GET_MODULE_INFO,
+ NBL_CHAN_MSG_GET_MODULE_EEPROM,
+ NBL_CHAN_MSG_GET_LINK_STATE,
+ NBL_CHAN_MSG_NOTIFY_LINK_STATE,
+ NBL_CHAN_MSG_GET_QUEUE_CXT,
+ NBL_CHAN_MSG_CFG_LOG,
+ NBL_CHAN_MSG_INIT_VDPAQ,
+ NBL_CHAN_MSG_DESTROY_VDPAQ,
+ NBL_CHAN_GET_UPCALL_PORT,
+ NBL_CHAN_MSG_NOTIFY_ETH_REP_LINK_STATE,
+ NBL_CHAN_MSG_SET_ETH_MAC_ADDR,
+ NBL_CHAN_MSG_GET_FUNCTION_ID,
+ NBL_CHAN_MSG_GET_CHIP_TEMPERATURE,
+ NBL_CHAN_MSG_DISABLE_HW_FLOW,
+ NBL_CHAN_MSG_ENABLE_HW_FLOW,
+ NBL_CHAN_MSG_SET_UPCALL_RULE,
+ NBL_CHAN_MSG_UNSET_UPCALL_RULE,
+ NBL_CHAN_MSG_GET_REG_DUMP,
+ NBL_CHAN_MSG_GET_REG_DUMP_LEN,
+ NBL_CHAN_MSG_CFG_LAG_HASH_ALGORITHM,
+ NBL_CHAN_MSG_CFG_LAG_MEMBER_FWD,
+ NBL_CHAN_MSG_CFG_LAG_MEMBER_LIST,
+ NBL_CHAN_MSG_CFG_LAG_MEMBER_UP_ATTR,
+ NBL_CHAN_MSG_ADD_LAG_FLOW,
+ NBL_CHAN_MSG_DEL_LAG_FLOW,
+ NBL_CHAN_MSG_SWITCHDEV_INIT_CMDQ,
+ NBL_CHAN_MSG_SWITCHDEV_DEINIT_CMDQ,
+ NBL_CHAN_MSG_SET_TC_FLOW_INFO,
+ NBL_CHAN_MSG_UNSET_TC_FLOW_INFO,
+ NBL_CHAN_MSG_INIT_ACL,
+ NBL_CHAN_MSG_UNINIT_ACL,
+ NBL_CHAN_MSG_CFG_LAG_MCC,
+ NBL_CHAN_MSG_REGISTER_VSI2Q,
+ NBL_CHAN_MSG_SETUP_Q2VSI,
+ NBL_CHAN_MSG_REMOVE_Q2VSI,
+ NBL_CHAN_MSG_SETUP_RSS,
+ NBL_CHAN_MSG_REMOVE_RSS,
+ NBL_CHAN_MSG_GET_REP_QUEUE_INFO,
+ NBL_CHAN_MSG_CTRL_PORT_LED,
+ NBL_CHAN_MSG_NWAY_RESET,
+ NBL_CHAN_MSG_SET_INTL_SUPPRESS_LEVEL,
+ NBL_CHAN_MSG_GET_ETH_STATS,
+ NBL_CHAN_MSG_GET_MODULE_TEMPERATURE,
+ NBL_CHAN_MSG_GET_BOARD_INFO,
+ NBL_CHAN_MSG_GET_P4_USED,
+ NBL_CHAN_MSG_GET_VF_BASE_VSI_ID,
+ NBL_CHAN_MSG_ADD_LLDP_FLOW,
+ NBL_CHAN_MSG_DEL_LLDP_FLOW,
+ NBL_CHAN_MSG_CFG_ETH_BOND_INFO,
+ NBL_CHAN_MSG_CFG_DUPPKT_MCC,
+ NBL_CHAN_MSG_ADD_ND_UPCALL_FLOW,
+ NBL_CHAN_MSG_DEL_ND_UPCALL_FLOW,
+ NBL_CHAN_MSG_GET_BOARD_ID,
+ NBL_CHAN_MSG_SET_SHAPING_DPORT_VLD,
+ NBL_CHAN_MSG_SET_DPORT_FC_TH_VLD,
+ NBL_CHAN_MSG_REGISTER_RDMA_BOND,
+ NBL_CHAN_MSG_UNREGISTER_RDMA_BOND,
+ NBL_CHAN_MSG_RESTORE_NETDEV_QUEUE,
+ NBL_CHAN_MSG_RESTART_NETDEV_QUEUE,
+ NBL_CHAN_MSG_RESTORE_HW_QUEUE,
+ NBL_CHAN_MSG_KEEP_ALIVE,
+ NBL_CHAN_MSG_GET_BASE_MAC_ADDR,
+ NBL_CHAN_MSG_CFG_BOND_SHAPING,
+ NBL_CHAN_MSG_CFG_BGID_BACK_PRESSURE,
+ NBL_CHAN_MSG_ALLOC_KT_BLOCK,
+ NBL_CHAN_MSG_FREE_KT_BLOCK,
+ NBL_CHAN_MSG_GET_USER_QUEUE_INFO,
+ NBL_CHAN_MSG_GET_ETH_BOND_INFO,
+ NBL_CHAN_MSG_CLEAR_ACCEL_FLOW,
+ NBL_CHAN_MSG_SET_BRIDGE_MODE,
+ NBL_CHAN_MSG_GET_VF_FUNCTION_ID,
+ NBL_CHAN_MSG_NOTIFY_LINK_FORCED,
+ NBL_CHAN_MSG_SET_PMD_DEBUG,
+ NBL_CHAN_MSG_REGISTER_FUNC_MAC,
+ NBL_CHAN_MSG_SET_TX_RATE,
+ NBL_CHAN_MSG_REGISTER_FUNC_LINK_FORCED,
+ NBL_CHAN_MSG_GET_LINK_FORCED,
+ NBL_CHAN_MSG_REGISTER_FUNC_VLAN,
+ NBL_CHAN_MSG_GET_FD_FLOW,
+ NBL_CHAN_MSG_GET_FD_FLOW_CNT,
+ NBL_CHAN_MSG_GET_FD_FLOW_ALL,
+ NBL_CHAN_MSG_GET_FD_FLOW_MAX,
+ NBL_CHAN_MSG_REPLACE_FD_FLOW,
+ NBL_CHAN_MSG_REMOVE_FD_FLOW,
+ NBL_CHAN_MSG_CFG_FD_FLOW_STATE,
+ NBL_CHAN_MSG_REGISTER_FUNC_RATE,
+ NBL_CHAN_MSG_NOTIFY_VLAN,
+ NBL_CHAN_MSG_GET_XDP_QUEUE_INFO,
+ NBL_CHAN_MSG_STOP_ABNORMAL_SW_QUEUE,
+ NBL_CHAN_MSG_STOP_ABNORMAL_HW_QUEUE,
+ NBL_CHAN_MSG_NOTIFY_RESET_EVENT,
+ NBL_CHAN_MSG_ACK_RESET_EVENT,
+ NBL_CHAN_MSG_GET_VF_VSI_ID,
+ NBL_CHAN_MSG_CONFIGURE_QOS,
+ NBL_CHAN_MSG_GET_PFC_BUFFER_SIZE,
+ NBL_CHAN_MSG_SET_PFC_BUFFER_SIZE,
+ NBL_CHAN_MSG_GET_VF_STATS,
+ NBL_CHAN_MSG_REGISTER_FUNC_TRUST,
+ NBL_CHAN_MSG_NOTIFY_TRUST,
+ NBL_CHAN_CHECK_VF_IS_ACTIVE,
+ NBL_CHAN_MSG_GET_ETH_ABNORMAL_STATS,
+ NBL_CHAN_MSG_GET_ETH_CTRL_STATS,
+ NBL_CHAN_MSG_GET_PAUSE_STATS,
+ NBL_CHAN_MSG_GET_ETH_MAC_STATS,
+ NBL_CHAN_MSG_GET_FEC_STATS,
+ NBL_CHAN_MSG_CFG_MULTI_MCAST_RULE,
+ NBL_CHAN_MSG_GET_LINK_DOWN_COUNT,
+ NBL_CHAN_MSG_GET_LINK_STATUS_OPCODE,
+ NBL_CHAN_MSG_GET_RMON_STATS,
+ NBL_CHAN_MSG_REGISTER_PF_NAME,
+ NBL_CHAN_MSG_GET_PF_NAME,
+ NBL_CHAN_MSG_CONFIGURE_RDMA_BW,
+ NBL_CHAN_MSG_SET_RATE_LIMIT,
+ NBL_CHAN_MSG_SET_TC_WGT,
+ NBL_CHAN_MSG_REMOVE_QUEUE,
+ NBL_CHAN_MSG_GET_MIRROR_TABLE_ID,
+ NBL_CHAN_MSG_CONFIGURE_MIRROR,
+ NBL_CHAN_MSG_CONFIGURE_MIRROR_TABLE,
+ NBL_CHAN_MSG_CLEAR_MIRROR_CFG,
+ NBL_CHAN_MSG_MIRROR_OUTPUTPORT_NOTIFY,
+ NBL_CHAN_MSG_CHECK_FLOWTABLE_SPEC,
+ NBL_CHAN_CHECK_VF_IS_VDPA,
+ NBL_CHAN_MSG_GET_VDPA_VF_STATS,
+ NBL_CHAN_MSG_SET_RX_RATE,
+ NBL_CHAN_GET_UVN_PKT_DROP_STATS,
+ NBL_CHAN_GET_USTORE_PKT_DROP_STATS,
+ NBL_CHAN_GET_USTORE_TOTAL_PKT_DROP_STATS,
+ NBL_CHAN_MSG_SET_WOL,
+ NBL_CHAN_MSG_INIT_VF_MSIX_MAP,
+ NBL_CHAN_MSG_GET_ST_NAME,
+ NBL_CHAN_MSG_MTU_SET = 501,
+ NBL_CHAN_MSG_SET_RXFH_INDIR = 506,
+ NBL_CHAN_MSG_SET_RXFH_RSS_ALG_SEL = 508,
+ /* mailbox msg end */
+ NBL_CHAN_MSG_MAILBOX_MAX,
+
+ /* adminq msg */
+ NBL_CHAN_MSG_ADMINQ_GET_EMP_VERSION =
+ 0x8101, /* Deprecated, should not be used */
+ NBL_CHAN_MSG_ADMINQ_GET_NVM_VERSION = 0x8102,
+ NBL_CHAN_MSG_ADMINQ_REBOOT = 0x8104,
+ NBL_CHAN_MSG_ADMINQ_FLR_NOTIFY = 0x8105,
+ NBL_CHAN_MSG_ADMINQ_NOTIFY_FW_RESET = 0x8106,
+ NBL_CHAN_MSG_ADMINQ_LOAD_P4 = 0x8107,
+ NBL_CHAN_MSG_ADMINQ_LOAD_P4_DEFAULT = 0x8108,
+ NBL_CHAN_MSG_ADMINQ_EXT_ALERT = 0x8109,
+ NBL_CHAN_MSG_ADMINQ_FLASH_ERASE = 0x8201,
+ NBL_CHAN_MSG_ADMINQ_FLASH_READ = 0x8202,
+ NBL_CHAN_MSG_ADMINQ_FLASH_WRITE = 0x8203,
+ NBL_CHAN_MSG_ADMINQ_FLASH_ACTIVATE = 0x8204,
+ NBL_CHAN_MSG_ADMINQ_RESOURCE_WRITE = 0x8205,
+ NBL_CHAN_MSG_ADMINQ_RESOURCE_READ = 0x8206,
+ NBL_CHAN_MSG_ADMINQ_REGISTER_WRITE = 0x8207,
+ NBL_CHAN_MSG_ADMINQ_REGISTER_READ = 0x8208,
+ NBL_CHAN_MSG_ADMINQ_GET_NVM_BANK_INDEX = 0x820B,
+ NBL_CHAN_MSG_ADMINQ_VERIFY_NVM_BANK = 0x820C,
+ NBL_CHAN_MSG_ADMINQ_FLASH_LOCK = 0x820D,
+ NBL_CHAN_MSG_ADMINQ_FLASH_UNLOCK = 0x820E,
+ NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES = 0x8300,
+ NBL_CHAN_MSG_ADMINQ_PORT_NOTIFY = 0x8301,
+ NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM = 0x8302,
+ NBL_CHAN_MSG_ADMINQ_GET_ETH_STATS = 0x8303,
+ NBL_CHAN_MSG_ADMINQ_GET_FEC_STATS = 0x8305,
+
+ NBL_CHAN_MSG_ADMINQ_EMP_CONSOLE_WRITE = 0x8F01,
+ NBL_CHAN_MSG_ADMINQ_EMP_CONSOLE_READ = 0x8F02,
+
+ NBL_CHAN_MSG_MAX,
+};
+
+struct nbl_chan_param_register_net_info {
+ u16 pf_bdf;
+ u64 vf_bar_start;
+ u64 vf_bar_size;
+ u16 total_vfs;
+ u16 offset;
+ u16 stride;
+ u64 pf_bar_start;
+ u16 is_vdpa;
+};
+
+struct nbl_chan_param_alloc_txrx_queues {
+ u16 vsi_id;
+ u16 queue_num;
+};
+
+struct nbl_chan_param_register_vsi2q {
+ u16 vsi_index;
+ u16 vsi_id;
+ u16 queue_offset;
+ u16 queue_num;
+};
+
+struct nbl_chan_param_setup_queue {
+ struct nbl_txrx_queue_param queue_param;
+ bool is_tx;
+};
+
+struct nbl_chan_param_cfg_dsch {
+ u16 vsi_id;
+ bool vld;
+};
+
+struct nbl_chan_param_setup_cqs {
+ u16 vsi_id;
+ u16 real_qps;
+ bool rss_indir_set;
+};
+
+struct nbl_chan_param_set_promisc_mode {
+ u16 vsi_id;
+ u16 mode;
+};
+
+struct nbl_chan_param_cfg_msix_map {
+ u16 num_net_msix;
+ u16 num_others_msix;
+ u16 msix_mask_en;
+};
+
+struct nbl_chan_param_enable_mailbox_irq {
+ u16 vector_id;
+ bool enable_msix;
+};
+
+struct nbl_chan_param_get_global_vector {
+ u16 vsi_id;
+ u16 vector_id;
+};
+
+struct nbl_chan_param_get_vsi_id {
+ u16 vsi_id;
+ u16 type;
+};
+
+struct nbl_chan_param_get_eth_id {
+ u16 vsi_id;
+ u8 eth_mode;
+ u8 eth_id;
+ u8 logic_eth_id;
+};
+
+struct nbl_chan_param_set_rxfh_indir {
+ u16 vsi_id;
+ u32 indir_size;
+#define NBL_RXFH_INDIR_MAX_SIZE (512)
+ u32 indir[NBL_RXFH_INDIR_MAX_SIZE];
+};
+
enum nbl_channel_type {
NBL_CHAN_TYPE_MAILBOX,
NBL_CHAN_TYPE_MAX
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index f58b4dbb0aab..7999671fcb26 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -31,4 +31,21 @@ struct nbl_init_param {
bool pci_using_dac;
};
+struct nbl_txrx_queue_param {
+ u16 vsi_id;
+ u64 dma;
+ u64 avail;
+ u64 used;
+ u16 desc_num;
+ u16 local_queue_id;
+ u16 intr_en;
+ u16 intr_mask;
+ u16 global_vec_id;
+ u16 half_offload_en;
+ u16 split;
+ u16 extend_header;
+ u16 cxt;
+ u16 rxcsum;
+};
+
#endif
--
2.47.3
* [PATCH v3 net-next 04/15] net/nebula-matrix: add channel layer implementation and the corresponding hw interfaces
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (2 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 03/15] net/nebula-matrix: add machine-generated headers and chip definitions illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-27 18:15 ` [v3,net-next,04/15] " Simon Horman
2026-01-23 1:17 ` [PATCH v3 net-next 05/15] net/nebula-matrix: add resource layer common part implementation illusion.wang
` (11 subsequent siblings)
15 siblings, 1 reply; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
Add a channel management layer that provides a structured approach to
handling communication between different components and drivers. A summary
of its key functionality:
1. Message Handling Framework
Message Registration/Unregistration: Functions (nbl_chan_register_msg,
nbl_chan_unregister_msg) allow dynamic registration of message handlers
for specific message types, enabling extensible communication protocols.
Message Sending/Acknowledgment: Core functions (nbl_chan_send_msg,
nbl_chan_send_ack) handle message transmission, including asynchronous
operations with acknowledgment (ACK) support.
Received ACKs are processed via nbl_chan_recv_ack_msg.
Hash-Based Handler Lookup: A hash table (handle_hash_tbl) stores message
handlers for efficient O(1) lookup by message type.
2. Channel Types and Queue Management
Mailbox Channel: For direct communication between PF0 and the other PFs.
Queue Initialization/Teardown: Functions (nbl_chan_init_queue,
nbl_chan_teardown_queue) manage transmit (TX) and receive (RX)
queues, including DMA buffer allocation/deallocation
(dmam_alloc_coherent, dmam_free_coherent).
Queue Configuration: Hardware-specific queue parameters (e.g., buffer
sizes, entry counts) are set via nbl_chan_config_queue, with hardware
interactions delegated to hw_ops.
3. Hardware Abstraction Layer (HW Ops)
Hardware-Specific Operations: The nbl_hw_ops structure abstracts
hardware interactions: queue configuration (config_mailbox_txq/rxq),
tail pointer updates (update_mailbox_queue_tail_ptr), and DMA error checks
(check_mailbox_dma_err).
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 3 +-
.../nbl/nbl_channel/nbl_channel.c | 1154 +++++++++++++++++
.../nbl/nbl_channel/nbl_channel.h | 132 ++
.../nebula-matrix/nbl/nbl_common/nbl_common.c | 302 +++++
.../nebula-matrix/nbl/nbl_common/nbl_common.h | 29 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 129 ++
.../nbl/nbl_include/nbl_def_channel.h | 111 ++
.../nbl/nbl_include/nbl_def_common.h | 74 ++
.../nbl/nbl_include/nbl_def_hw.h | 31 +
.../nbl/nbl_include/nbl_include.h | 15 +
10 files changed, 1979 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index 4a492f89c5bc..db646a54877b 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -4,7 +4,8 @@
obj-$(CONFIG_NBL_CORE) := nbl_core.o
-nbl_core-objs += nbl_channel/nbl_channel.o \
+nbl_core-objs += nbl_common/nbl_common.o \
+ nbl_channel/nbl_channel.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
index a262a765c141..0211bed53692 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
@@ -6,7 +6,1137 @@
#include <linux/delay.h>
#include "nbl_channel.h"
+static int nbl_chan_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack);
+
+static void nbl_chan_delete_msg_handler(struct nbl_channel_mgt *chan_mgt,
+ u16 msg_type)
+{
+ struct nbl_chan_info *chan_info;
+ u8 chan_type = NBL_CHAN_TYPE_MAILBOX;
+
+ nbl_common_free_hash_node(chan_mgt->handle_hash_tbl, &msg_type);
+
+ chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ if (chan_info && chan_info->clean_task)
+ nbl_common_flush_task(chan_info->clean_task);
+}
+
+static int nbl_chan_add_msg_handler(struct nbl_channel_mgt *chan_mgt,
+ u16 msg_type, nbl_chan_resp func,
+ void *priv)
+{
+ struct nbl_chan_msg_node_data handler = { 0 };
+ int ret;
+
+ handler.func = func;
+ handler.priv = priv;
+
+ ret = nbl_common_alloc_hash_node(chan_mgt->handle_hash_tbl, &msg_type,
+ &handler, NULL);
+
+ return ret;
+}
+
+static int nbl_chan_init_msg_handler(struct nbl_channel_mgt *chan_mgt)
+{
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_hash_tbl_key tbl_key;
+
+	NBL_HASH_TBL_KEY_INIT(&tbl_key, NBL_COMMON_TO_DEV(common), sizeof(u16),
+			      sizeof(struct nbl_chan_msg_node_data),
+			      NBL_CHAN_HANDLER_TBL_BUCKET_SIZE, false);
+
+	chan_mgt->handle_hash_tbl = nbl_common_init_hash_table(&tbl_key);
+	if (!chan_mgt->handle_hash_tbl)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void nbl_chan_remove_msg_handler(struct nbl_channel_mgt *chan_mgt)
+{
+ nbl_common_remove_hash_table(chan_mgt->handle_hash_tbl, NULL);
+
+ chan_mgt->handle_hash_tbl = NULL;
+}
+
+static void nbl_chan_init_queue_param(struct nbl_chan_info *chan_info,
+ u16 num_txq_entries, u16 num_rxq_entries,
+ u16 txq_buf_size, u16 rxq_buf_size)
+{
+ spin_lock_init(&chan_info->txq_lock);
+ chan_info->num_txq_entries = num_txq_entries;
+ chan_info->num_rxq_entries = num_rxq_entries;
+ chan_info->txq_buf_size = txq_buf_size;
+ chan_info->rxq_buf_size = rxq_buf_size;
+}
+
+static int nbl_chan_init_tx_queue(struct nbl_common_info *common,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+ struct nbl_chan_ring *txq = &chan_info->txq;
+ size_t size =
+ chan_info->num_txq_entries * sizeof(struct nbl_chan_tx_desc);
+
+ txq->desc = dmam_alloc_coherent(dma_dev, size, &txq->dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!txq->desc)
+ return -ENOMEM;
+
+ chan_info->wait = devm_kcalloc(dev, chan_info->num_txq_entries,
+ sizeof(struct nbl_chan_waitqueue_head),
+ GFP_KERNEL);
+ if (!chan_info->wait)
+ goto req_wait_queue_failed;
+
+ txq->buf = devm_kcalloc(dev, chan_info->num_txq_entries,
+ sizeof(struct nbl_chan_buf), GFP_KERNEL);
+ if (!txq->buf)
+ goto req_num_txq_entries;
+
+ return 0;
+
+req_num_txq_entries:
+ devm_kfree(dev, chan_info->wait);
+req_wait_queue_failed:
+ dmam_free_coherent(dma_dev, size, txq->desc, txq->dma);
+
+ txq->desc = NULL;
+ txq->dma = 0;
+ chan_info->wait = NULL;
+ return -ENOMEM;
+}
+
+static int nbl_chan_init_rx_queue(struct nbl_common_info *common,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+ struct nbl_chan_ring *rxq = &chan_info->rxq;
+ size_t size =
+ chan_info->num_rxq_entries * sizeof(struct nbl_chan_rx_desc);
+
+ rxq->desc = dmam_alloc_coherent(dma_dev, size, &rxq->dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!rxq->desc) {
+ dev_err(dev,
+ "Allocate DMA for chan rx descriptor ring failed\n");
+ return -ENOMEM;
+ }
+
+ rxq->buf = devm_kcalloc(dev, chan_info->num_rxq_entries,
+ sizeof(struct nbl_chan_buf), GFP_KERNEL);
+ if (!rxq->buf) {
+ dmam_free_coherent(dma_dev, size, rxq->desc, rxq->dma);
+ rxq->desc = NULL;
+ rxq->dma = 0;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void nbl_chan_remove_tx_queue(struct nbl_common_info *common,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+ struct nbl_chan_ring *txq = &chan_info->txq;
+ size_t size =
+ chan_info->num_txq_entries * sizeof(struct nbl_chan_tx_desc);
+
+ devm_kfree(dev, txq->buf);
+ txq->buf = NULL;
+
+ devm_kfree(dev, chan_info->wait);
+ chan_info->wait = NULL;
+
+ dmam_free_coherent(dma_dev, size, txq->desc, txq->dma);
+ txq->desc = NULL;
+ txq->dma = 0;
+}
+
+static void nbl_chan_remove_rx_queue(struct nbl_common_info *common,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+ struct nbl_chan_ring *rxq = &chan_info->rxq;
+ size_t size =
+ chan_info->num_rxq_entries * sizeof(struct nbl_chan_rx_desc);
+
+ devm_kfree(dev, rxq->buf);
+ rxq->buf = NULL;
+
+ dmam_free_coherent(dma_dev, size, rxq->desc, rxq->dma);
+ rxq->desc = NULL;
+ rxq->dma = 0;
+}
+
+static int nbl_chan_init_queue(struct nbl_common_info *common,
+ struct nbl_chan_info *chan_info)
+{
+ int err;
+
+ err = nbl_chan_init_tx_queue(common, chan_info);
+ if (err)
+ return err;
+
+ err = nbl_chan_init_rx_queue(common, chan_info);
+ if (err)
+ goto setup_rx_queue_err;
+
+ return 0;
+
+setup_rx_queue_err:
+ nbl_chan_remove_tx_queue(common, chan_info);
+ return err;
+}
+
+static void nbl_chan_config_queue(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info, bool tx)
+{
+ struct nbl_hw_ops *hw_ops;
+ struct nbl_chan_ring *ring;
+ dma_addr_t dma_addr;
+ void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+ int size_bwid = ilog2(chan_info->num_rxq_entries);
+
+ hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+ if (tx)
+ ring = &chan_info->txq;
+ else
+ ring = &chan_info->rxq;
+
+ dma_addr = ring->dma;
+
+ if (tx)
+ hw_ops->config_mailbox_txq(p, dma_addr, size_bwid);
+ else
+ hw_ops->config_mailbox_rxq(p, dma_addr, size_bwid);
+}
+
+static int nbl_chan_alloc_all_tx_bufs(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+ struct device *dev = NBL_COMMON_TO_DEV(chan_mgt->common);
+ struct nbl_chan_ring *txq = &chan_info->txq;
+ struct nbl_chan_buf *buf;
+ u16 i;
+
+ for (i = 0; i < chan_info->num_txq_entries; i++) {
+ buf = &txq->buf[i];
+ buf->va = dmam_alloc_coherent(dma_dev, chan_info->txq_buf_size,
+ &buf->pa,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!buf->va) {
+ dev_err(dev,
+ "Allocate buffer for chan tx queue failed\n");
+ goto err;
+ }
+ }
+
+ txq->next_to_clean = 0;
+ txq->next_to_use = 0;
+ txq->tail_ptr = 0;
+
+ return 0;
+err:
+ while (i--) {
+ buf = &txq->buf[i];
+ dmam_free_coherent(dma_dev, chan_info->txq_buf_size, buf->va,
+ buf->pa);
+ buf->va = NULL;
+ buf->pa = 0;
+ }
+
+ return -ENOMEM;
+}
+
+static int
+nbl_chan_cfg_mailbox_qinfo_map_table(struct nbl_channel_mgt *chan_mgt)
+{
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+ void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+ u16 func_id;
+ u32 pf_mask;
+
+ pf_mask = hw_ops->get_host_pf_mask(p);
+ for (func_id = 0; func_id < NBL_MAX_PF; func_id++) {
+ if (!(pf_mask & (1 << func_id)))
+ hw_ops->cfg_mailbox_qinfo(p, func_id,
+ common->hw_bus,
+ common->devid,
+ common->function + func_id);
+ }
+
+ return 0;
+}
+
+static int nbl_chan_cfg_qinfo_map_table(void *priv, u8 chan_type)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+ return nbl_chan_cfg_mailbox_qinfo_map_table(chan_mgt);
+}
+
+static void nbl_chan_free_all_tx_bufs(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+ struct nbl_chan_ring *txq = &chan_info->txq;
+ struct nbl_chan_buf *buf;
+ u16 i;
+
+ for (i = 0; i < chan_info->num_txq_entries; i++) {
+ buf = &txq->buf[i];
+ dmam_free_coherent(dma_dev, chan_info->txq_buf_size, buf->va,
+ buf->pa);
+ buf->va = NULL;
+ buf->pa = 0;
+ }
+}
+
+#define NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, tail_ptr, qid)\
+do { \
+ (void)(chan_info); \
+ typeof(hw_ops) _hw_ops = (hw_ops); \
+ typeof(chan_mgt) _chan_mgt = (chan_mgt); \
+ typeof(tail_ptr) _tail_ptr = (tail_ptr); \
+ typeof(qid) _qid = (qid); \
+ (_hw_ops)->update_mailbox_queue_tail_ptr( \
+ NBL_CHAN_MGT_TO_HW_PRIV(_chan_mgt), _tail_ptr, _qid); \
+} while (0)
+
+static int nbl_chan_alloc_all_rx_bufs(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+ struct device *dev = NBL_COMMON_TO_DEV(chan_mgt->common);
+ struct nbl_chan_ring *rxq = &chan_info->rxq;
+ struct nbl_chan_rx_desc *desc;
+ struct nbl_chan_buf *buf;
+ struct nbl_hw_ops *hw_ops;
+ u32 retry_times = 0;
+ u16 i;
+
+ hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+ for (i = 0; i < chan_info->num_rxq_entries; i++) {
+ buf = &rxq->buf[i];
+ buf->va = dmam_alloc_coherent(dma_dev, chan_info->rxq_buf_size,
+ &buf->pa,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!buf->va) {
+ dev_err(dev,
+ "Allocate buffer for chan rx queue failed\n");
+ goto err;
+ }
+ }
+
+ desc = rxq->desc;
+ for (i = 0; i < chan_info->num_rxq_entries - 1; i++) {
+ buf = &rxq->buf[i];
+ desc[i].flags = NBL_CHAN_RX_DESC_AVAIL;
+ desc[i].buf_addr = buf->pa;
+ desc[i].buf_len = chan_info->rxq_buf_size;
+ }
+
+ rxq->next_to_clean = 0;
+ rxq->next_to_use = chan_info->num_rxq_entries - 1;
+ rxq->tail_ptr = chan_info->num_rxq_entries - 1;
+
+ /* mb for notify */
+ mb();
+
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, rxq->tail_ptr,
+ NBL_MB_RX_QID);
+
+ for (retry_times = 0; retry_times < 3; retry_times++) {
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt,
+ rxq->tail_ptr, NBL_MB_RX_QID);
+ usleep_range(NBL_CHAN_TX_WAIT_US * 50,
+ NBL_CHAN_TX_WAIT_US * 60);
+ }
+
+ return 0;
+err:
+ while (i--) {
+ buf = &rxq->buf[i];
+ dmam_free_coherent(dma_dev, chan_info->rxq_buf_size, buf->va,
+ buf->pa);
+ buf->va = NULL;
+ buf->pa = 0;
+ }
+
+ return -ENOMEM;
+}
+
+static void nbl_chan_free_all_rx_bufs(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+ struct nbl_chan_ring *rxq = &chan_info->rxq;
+ struct nbl_chan_buf *buf;
+ u16 i;
+
+ for (i = 0; i < chan_info->num_rxq_entries; i++) {
+ buf = &rxq->buf[i];
+ dmam_free_coherent(dma_dev, chan_info->rxq_buf_size, buf->va,
+ buf->pa);
+ buf->va = NULL;
+ buf->pa = 0;
+ }
+}
+
+static int nbl_chan_alloc_all_bufs(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ int err;
+
+ err = nbl_chan_alloc_all_tx_bufs(chan_mgt, chan_info);
+ if (err)
+ return err;
+
+ err = nbl_chan_alloc_all_rx_bufs(chan_mgt, chan_info);
+ if (err)
+ goto alloc_rx_bufs_err;
+
+ return 0;
+
+alloc_rx_bufs_err:
+ nbl_chan_free_all_tx_bufs(chan_mgt, chan_info);
+ return err;
+}
+
+static void nbl_chan_stop_queue(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct nbl_hw_ops *hw_ops;
+
+ hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+ hw_ops->stop_mailbox_rxq(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt));
+ hw_ops->stop_mailbox_txq(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt));
+}
+
+static void nbl_chan_free_all_bufs(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ nbl_chan_free_all_tx_bufs(chan_mgt, chan_info);
+ nbl_chan_free_all_rx_bufs(chan_mgt, chan_info);
+}
+
+static void nbl_chan_remove_queue(struct nbl_common_info *common,
+ struct nbl_chan_info *chan_info)
+{
+ nbl_chan_remove_tx_queue(common, chan_info);
+ nbl_chan_remove_rx_queue(common, chan_info);
+}
+
+static int nbl_chan_teardown_queue(void *priv, u8 chan_type)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ struct nbl_common_info *common = chan_mgt->common;
+
+ nbl_chan_stop_queue(chan_mgt, chan_info);
+ nbl_chan_free_all_bufs(chan_mgt, chan_info);
+ nbl_chan_remove_queue(common, chan_info);
+ return 0;
+}
+
+static int nbl_chan_setup_queue(void *priv, u8 chan_type)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ int err;
+
+ nbl_chan_init_queue_param(chan_info, NBL_CHAN_QUEUE_LEN,
+ NBL_CHAN_QUEUE_LEN, NBL_CHAN_BUF_LEN,
+ NBL_CHAN_BUF_LEN);
+ err = nbl_chan_init_queue(common, chan_info);
+ if (err)
+ return err;
+
+ nbl_chan_config_queue(chan_mgt, chan_info, true); /* tx */
+ nbl_chan_config_queue(chan_mgt, chan_info, false); /* rx */
+
+ err = nbl_chan_alloc_all_bufs(chan_mgt, chan_info);
+ if (err)
+ goto chan_q_setup_fail;
+
+ return 0;
+
+chan_q_setup_fail:
+ nbl_chan_teardown_queue(chan_mgt, chan_type);
+ return err;
+}
+
+static void nbl_chan_shutdown_queue(struct nbl_channel_mgt *chan_mgt,
+ u8 chan_type, bool tx)
+{
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+ struct nbl_hw_ops *hw_ops;
+
+ hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+ if (tx) {
+ hw_ops->stop_mailbox_txq(p);
+ nbl_chan_free_all_tx_bufs(chan_mgt, chan_info);
+ nbl_chan_remove_tx_queue(common, chan_info);
+ } else {
+ hw_ops->stop_mailbox_rxq(p);
+ nbl_chan_free_all_rx_bufs(chan_mgt, chan_info);
+ nbl_chan_remove_rx_queue(common, chan_info);
+ }
+}
+
+static int nbl_chan_start_txq(struct nbl_channel_mgt *chan_mgt, u8 chan_type)
+{
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ int ret;
+
+ ret = nbl_chan_init_tx_queue(common, chan_info);
+ if (ret)
+ return ret;
+
+ nbl_chan_config_queue(chan_mgt, chan_info, true); /* tx */
+
+ ret = nbl_chan_alloc_all_tx_bufs(chan_mgt, chan_info);
+ if (ret)
+ goto alloc_buf_failed;
+
+ return 0;
+
+alloc_buf_failed:
+ nbl_chan_shutdown_queue(chan_mgt, chan_type, true);
+ return ret;
+}
+
+static int nbl_chan_start_rxq(struct nbl_channel_mgt *chan_mgt, u8 chan_type)
+{
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ int ret;
+
+ ret = nbl_chan_init_rx_queue(common, chan_info);
+ if (ret)
+ return ret;
+
+ nbl_chan_config_queue(chan_mgt, chan_info, false); /* rx */
+
+ ret = nbl_chan_alloc_all_rx_bufs(chan_mgt, chan_info);
+ if (ret)
+ goto alloc_buf_failed;
+
+ return 0;
+
+alloc_buf_failed:
+ nbl_chan_shutdown_queue(chan_mgt, chan_type, false);
+ return ret;
+}
+
+static int nbl_chan_reset_queue(struct nbl_channel_mgt *chan_mgt, u8 chan_type,
+ bool tx)
+{
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+ int i = 0, j = 0, ret = 0;
+
+ /* If someone else is doing resetting, don't bother */
+ if (test_bit(NBL_CHAN_RESETTING, chan_info->state))
+ return 0;
+
+ /* Make sure rx won't enter if we are resetting */
+ set_bit(NBL_CHAN_RESETTING, chan_info->state);
+ if (chan_info->clean_task)
+ nbl_common_flush_task(chan_info->clean_task);
+
+ /* Make sure tx won't enter if we are resetting */
+ spin_lock(&chan_info->txq_lock);
+
+ /* If we are in a race, and someone else has finished it, just return */
+ if (!test_bit(NBL_CHAN_RESETTING, chan_info->state)) {
+ spin_unlock(&chan_info->txq_lock);
+ return 0;
+ }
+
+ /* Make sure no one is waiting before we reset. */
+ while (i++ < (NBL_CHAN_ACK_WAIT_TIME * 2) / HZ) {
+ for (j = 0; j < NBL_CHAN_QUEUE_LEN; j++)
+ if (chan_info->wait[j].status == NBL_MBX_STATUS_WAITING)
+ break;
+
+ if (j == NBL_CHAN_QUEUE_LEN)
+ break;
+ mdelay(1000);
+ }
+
+ if (j != NBL_CHAN_QUEUE_LEN) {
+ nbl_warn(NBL_CHAN_MGT_TO_COMMON(chan_mgt),
+ "Some wait_head unreleased, fail to reset");
+ clear_bit(NBL_CHAN_RESETTING, chan_info->state);
+ spin_unlock(&chan_info->txq_lock);
+ return 0;
+ }
+
+ nbl_chan_shutdown_queue(chan_mgt, chan_type, tx);
+
+ if (tx)
+ ret = nbl_chan_start_txq(chan_mgt, chan_type);
+ else
+ ret = nbl_chan_start_rxq(chan_mgt, chan_type);
+
+ /* Make sure we clear this bit inside lock, so that we don't reset it
+ * twice if race
+ */
+ clear_bit(NBL_CHAN_RESETTING, chan_info->state);
+ spin_unlock(&chan_info->txq_lock);
+
+ return ret;
+}
+
+static bool nbl_chan_check_dma_err(struct nbl_channel_mgt *chan_mgt,
+ u8 chan_type, bool tx)
+{
+ struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+ void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+
+ return hw_ops->check_mailbox_dma_err(p, tx);
+}
+
+static int nbl_chan_update_txqueue(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info,
+ struct nbl_chan_tx_param *param)
+{
+ struct nbl_chan_ring *txq = &chan_info->txq;
+ struct nbl_chan_tx_desc *tx_desc =
+ NBL_CHAN_TX_RING_TO_DESC(txq, txq->next_to_use);
+ struct nbl_chan_buf *tx_buf =
+ NBL_CHAN_TX_RING_TO_BUF(txq, txq->next_to_use);
+
+ if (param->arg_len > NBL_CHAN_BUF_LEN - sizeof(*tx_desc))
+ return -EINVAL;
+
+ tx_desc->dstid = param->dstid;
+ tx_desc->msg_type = param->msg_type;
+ tx_desc->msgid = param->msgid;
+
+ if (param->arg_len > NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN) {
+ memcpy(tx_buf->va, param->arg, param->arg_len);
+ tx_desc->buf_addr = tx_buf->pa;
+ tx_desc->buf_len = param->arg_len;
+ tx_desc->data_len = 0;
+ } else {
+ memcpy(tx_desc->data, param->arg, param->arg_len);
+ tx_desc->buf_len = 0;
+ tx_desc->data_len = param->arg_len;
+ }
+ tx_desc->flags = NBL_CHAN_TX_DESC_AVAIL;
+
+ /* wmb */
+ wmb();
+ txq->next_to_use =
+ NBL_NEXT_ID(txq->next_to_use, chan_info->num_txq_entries - 1);
+ txq->tail_ptr++;
+
+ return 0;
+}
+
+static int nbl_chan_kick_tx_ring(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ struct nbl_chan_ring *txq = &chan_info->txq;
+ struct nbl_chan_tx_desc *tx_desc;
+ int i = 0;
+
+ /* mb for tx notify */
+ mb();
+
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, txq->tail_ptr,
+ NBL_MB_TX_QID);
+
+ tx_desc = NBL_CHAN_TX_RING_TO_DESC(txq, txq->next_to_clean);
+
+ while (!(tx_desc->flags & NBL_CHAN_TX_DESC_USED)) {
+ udelay(NBL_CHAN_TX_WAIT_US);
+ i++;
+
+ if (!(i % NBL_CHAN_TX_REKICK_WAIT_TIMES))
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt,
+ txq->tail_ptr, NBL_MB_TX_QID);
+
+ if (i == NBL_CHAN_TX_WAIT_TIMES) {
+ nbl_err(common, "chan send message type: %d timeout\n",
+ tx_desc->msg_type);
+ return -EAGAIN;
+ }
+ }
+
+ txq->next_to_clean = txq->next_to_use;
+ return 0;
+}
+
+static void nbl_chan_recv_ack_msg(void *priv, u16 srcid, u16 msgid, void *data,
+ u32 data_len)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ struct nbl_chan_info *chan_info = NULL;
+ struct nbl_chan_waitqueue_head *wait_head = NULL;
+ union nbl_chan_msg_id ack_msgid = { { 0 } };
+ u32 *payload = (u32 *)data;
+ u32 ack_datalen = 0, ack_msgtype = 0, copy_len = 0;
+
+ chan_info = NBL_CHAN_MGT_TO_MBX(chan_mgt);
+
+ ack_datalen = data_len - 3 * sizeof(u32);
+ ack_msgtype = *payload;
+ ack_msgid.id = *(u16 *)(payload + 1);
+ wait_head = &chan_info->wait[ack_msgid.info.loc];
+ wait_head->ack_err = *(payload + 2);
+ chan_info->failed_cnt = 0;
+
+ if (wait_head->msg_type != ack_msgtype) {
+ nbl_warn(common,
+ "Skip ack msg type %d donot match msg type %d\n",
+ ack_msgtype, wait_head->msg_type);
+ return;
+ }
+
+ if (wait_head->status != NBL_MBX_STATUS_WAITING) {
+ nbl_warn(common, "Skip ack with status %d", wait_head->status);
+ return;
+ }
+
+ if (wait_head->msg_index != ack_msgid.info.index) {
+ nbl_warn(common, "Skip ack index %d donot match index %d",
+ ack_msgid.info.index, wait_head->msg_index);
+ return;
+ }
+
+ if (ack_datalen != wait_head->ack_data_len)
+ nbl_debug(common,
+ "Channel payload_len donot match ack_data_len, msgtype:%u, msgid:%u, rcv_data_len:%u, expect_data_len:%u\n",
+ ack_msgtype, ack_msgid.id, ack_datalen,
+ wait_head->ack_data_len);
+
+ copy_len = min_t(u32, wait_head->ack_data_len, ack_datalen);
+ if (wait_head->ack_err >= 0 && copy_len > 0)
+ memcpy((char *)wait_head->ack_data, payload + 3, copy_len);
+ wait_head->ack_data_len = (u16)copy_len;
+
+ /* wmb */
+ wmb();
+ wait_head->acked = 1;
+ if (wait_head->need_waked)
+ wake_up(&wait_head->wait_queue);
+}
+
+static void nbl_chan_recv_msg(struct nbl_channel_mgt *chan_mgt, void *data,
+ u32 data_len)
+{
+ struct device *dev = NBL_COMMON_TO_DEV(chan_mgt->common);
+ struct nbl_chan_ack_info chan_ack;
+ struct nbl_chan_tx_desc *tx_desc;
+ struct nbl_chan_msg_node_data *msg_handler;
+ u16 msg_type, payload_len, srcid, msgid;
+ void *payload;
+
+ tx_desc = data;
+ msg_type = tx_desc->msg_type;
+ dev_dbg(dev, "recv msg_type: %d\n", tx_desc->msg_type);
+
+ srcid = tx_desc->srcid;
+ msgid = tx_desc->msgid;
+ if (msg_type >= NBL_CHAN_MSG_MAX)
+ goto send_warning;
+
+ if (tx_desc->data_len) {
+ payload = (void *)tx_desc->data;
+ payload_len = tx_desc->data_len;
+ } else {
+ payload = (void *)(tx_desc + 1);
+ payload_len = tx_desc->buf_len;
+ }
+
+ msg_handler =
+ nbl_common_get_hash_node(chan_mgt->handle_hash_tbl, &msg_type);
+ if (msg_handler) {
+ msg_handler->func(msg_handler->priv, srcid, msgid, payload,
+ payload_len);
+ return;
+ }
+
+send_warning:
+ NBL_CHAN_ACK(chan_ack, srcid, msg_type, msgid, -EPERM, NULL, 0);
+ nbl_chan_send_ack(chan_mgt, &chan_ack);
+ dev_warn(dev, "Recv channel msg_type: %d, but no handler registered\n",
+ msg_type);
+}
+
+static void nbl_chan_advance_rx_ring(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info,
+ struct nbl_chan_ring *rxq)
+{
+ struct nbl_chan_rx_desc *rx_desc;
+ struct nbl_hw_ops *hw_ops;
+ struct nbl_chan_buf *rx_buf;
+ u16 next_to_use;
+
+ hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+ next_to_use = rxq->next_to_use;
+ rx_desc = NBL_CHAN_RX_RING_TO_DESC(rxq, next_to_use);
+ rx_buf = NBL_CHAN_RX_RING_TO_BUF(rxq, next_to_use);
+
+ rx_desc->flags = NBL_CHAN_RX_DESC_AVAIL;
+ rx_desc->buf_addr = rx_buf->pa;
+ rx_desc->buf_len = chan_info->rxq_buf_size;
+
+ /* ensure the descriptor is written before moving the tail pointer */
+ wmb();
+ rxq->next_to_use++;
+ if (rxq->next_to_use == chan_info->num_rxq_entries)
+ rxq->next_to_use = 0;
+ rxq->tail_ptr++;
+
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, rxq->tail_ptr,
+ NBL_MB_RX_QID);
+}
+
+static void nbl_chan_clean_queue(struct nbl_channel_mgt *chan_mgt,
+ struct nbl_chan_info *chan_info)
+{
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ struct nbl_chan_ring *rxq = &chan_info->rxq;
+ struct nbl_chan_rx_desc *rx_desc;
+ struct nbl_chan_buf *rx_buf;
+ u16 next_to_clean;
+
+ next_to_clean = rxq->next_to_clean;
+ rx_desc = NBL_CHAN_RX_RING_TO_DESC(rxq, next_to_clean);
+ rx_buf = NBL_CHAN_RX_RING_TO_BUF(rxq, next_to_clean);
+ while (rx_desc->flags & NBL_CHAN_RX_DESC_USED) {
+ if (!(rx_desc->flags & NBL_CHAN_RX_DESC_WRITE))
+ nbl_debug(common,
+ "mailbox rx flag 0x%x has no NBL_CHAN_RX_DESC_WRITE\n",
+ rx_desc->flags);
+
+ dma_rmb();
+ nbl_chan_recv_msg(chan_mgt, rx_buf->va, rx_desc->buf_len);
+
+ nbl_chan_advance_rx_ring(chan_mgt, chan_info, rxq);
+
+ next_to_clean++;
+ if (next_to_clean == chan_info->num_rxq_entries)
+ next_to_clean = 0;
+ rx_desc = NBL_CHAN_RX_RING_TO_DESC(rxq, next_to_clean);
+ rx_buf = NBL_CHAN_RX_RING_TO_BUF(rxq, next_to_clean);
+ }
+ rxq->next_to_clean = next_to_clean;
+}
+
+static void nbl_chan_clean_queue_subtask(void *priv, u8 chan_type)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+ if (!test_bit(NBL_CHAN_INTERRUPT_READY, chan_info->state) ||
+ test_bit(NBL_CHAN_RESETTING, chan_info->state))
+ return;
+
+ nbl_chan_clean_queue(chan_mgt, chan_info);
+}
+
+static int nbl_chan_get_msg_id(struct nbl_chan_info *chan_info,
+ union nbl_chan_msg_id *msgid)
+{
+ struct nbl_chan_waitqueue_head *wait = NULL;
+ int valid_loc = chan_info->wait_head_index, i;
+
+ for (i = 0; i < chan_info->num_txq_entries; i++) {
+ wait = &chan_info->wait[valid_loc];
+
+ if (wait->status != NBL_MBX_STATUS_WAITING) {
+ wait->msg_index = NBL_NEXT_ID(wait->msg_index,
+ NBL_CHAN_MSG_INDEX_MAX);
+ msgid->info.index = wait->msg_index;
+ msgid->info.loc = valid_loc;
+
+ valid_loc = NBL_NEXT_ID(valid_loc,
+ chan_info->num_txq_entries - 1);
+ chan_info->wait_head_index = valid_loc;
+ return 0;
+ }
+
+ valid_loc =
+ NBL_NEXT_ID(valid_loc, chan_info->num_txq_entries - 1);
+ }
+
+ return -ENOSPC;
+}
+
+static int nbl_chan_send_msg(void *priv, struct nbl_chan_send_info *chan_send)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ struct nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_MBX(chan_mgt);
+ struct nbl_chan_waitqueue_head *wait_head;
+ union nbl_chan_msg_id msgid = { { 0 } };
+ struct nbl_chan_tx_param tx_param = { 0 };
+ int i = NBL_CHAN_TX_WAIT_ACK_TIMES, resend_times = 0, ret = 0;
+ bool need_resend = true; /* resend when waiting for the ack times out */
+
+ if (chan_send->arg_len >
+ NBL_CHAN_BUF_LEN - sizeof(struct nbl_chan_tx_desc))
+ return -EINVAL;
+
+ if (test_bit(NBL_CHAN_ABNORMAL, chan_info->state))
+ return -EFAULT;
+
+ if (chan_info->failed_cnt >= NBL_CHANNEL_FREEZE_FAILED_CNT)
+ return -EFAULT;
+
+resend:
+ spin_lock(&chan_info->txq_lock);
+
+ ret = nbl_chan_get_msg_id(chan_info, &msgid);
+ if (ret) {
+ spin_unlock(&chan_info->txq_lock);
+ nbl_err(common,
+ "Channel tx wait head full, send msgtype:%u to dstid:%u failed\n",
+ chan_send->msg_type, chan_send->dstid);
+ return ret;
+ }
+
+ tx_param.msg_type = chan_send->msg_type;
+ tx_param.arg = chan_send->arg;
+ tx_param.arg_len = chan_send->arg_len;
+ tx_param.dstid = chan_send->dstid;
+ tx_param.msgid = msgid.id;
+
+ ret = nbl_chan_update_txqueue(chan_mgt, chan_info, &tx_param);
+ if (ret) {
+ spin_unlock(&chan_info->txq_lock);
+ nbl_err(common,
+ "Channel tx queue full, send msgtype:%u to dstid:%u failed\n",
+ chan_send->msg_type, chan_send->dstid);
+ return ret;
+ }
+
+ wait_head = &chan_info->wait[msgid.info.loc];
+ init_waitqueue_head(&wait_head->wait_queue);
+ wait_head->acked = 0;
+ wait_head->ack_data = chan_send->resp;
+ wait_head->ack_data_len = chan_send->resp_len;
+ wait_head->msg_type = chan_send->msg_type;
+ wait_head->need_waked = chan_send->ack;
+ wait_head->msg_index = msgid.info.index;
+ wait_head->status = chan_send->ack ? NBL_MBX_STATUS_WAITING :
+ NBL_MBX_STATUS_IDLE;
+
+ ret = nbl_chan_kick_tx_ring(chan_mgt, chan_info);
+
+ spin_unlock(&chan_info->txq_lock);
+
+ if (ret) {
+ wait_head->status = NBL_MBX_STATUS_TIMEOUT;
+ goto check_tx_dma_err;
+ }
+
+ if (!chan_send->ack)
+ return 0;
+
+ if (chan_send->dstid != common->mgt_pf &&
+ chan_send->msg_type != NBL_CHAN_MSG_KEEP_ALIVE)
+ need_resend = false;
+
+ if (test_bit(NBL_CHAN_INTERRUPT_READY, chan_info->state)) {
+ ret = wait_event_timeout(wait_head->wait_queue,
+ wait_head->acked,
+ NBL_CHAN_ACK_WAIT_TIME);
+ if (!ret) {
+ wait_head->status = NBL_MBX_STATUS_TIMEOUT;
+ if (!need_resend) {
+ chan_info->failed_cnt++;
+ return 0;
+ }
+ nbl_err(common,
+ "Channel waiting ack failed, message type: %d, msg id: %u\n",
+ chan_send->msg_type, msgid.id);
+ goto check_rx_dma_err;
+ }
+
+ /* ensure ack data is read only after seeing the acked flag */
+ rmb();
+ chan_send->ack_len = wait_head->ack_data_len;
+ wait_head->status = NBL_MBX_STATUS_IDLE;
+ chan_info->failed_cnt = 0;
+
+ return wait_head->ack_err;
+ }
+
+ /* poll for the mailbox ack */
+ while (i--) {
+ nbl_chan_clean_queue(chan_mgt, chan_info);
+
+ if (wait_head->acked) {
+ chan_send->ack_len = wait_head->ack_data_len;
+ wait_head->status = NBL_MBX_STATUS_IDLE;
+ chan_info->failed_cnt = 0;
+ return wait_head->ack_err;
+ }
+ usleep_range(NBL_CHAN_TX_WAIT_ACK_US_MIN,
+ NBL_CHAN_TX_WAIT_ACK_US_MAX);
+ }
+
+ wait_head->status = NBL_MBX_STATUS_TIMEOUT;
+ nbl_err(common,
+ "Channel polling ack failed, message type: %d msg id: %u\n",
+ chan_send->msg_type, msgid.id);
+
+check_rx_dma_err:
+ if (nbl_chan_check_dma_err(chan_mgt, chan_info->chan_type, false)) {
+ nbl_err(common, "nbl channel rx dma error\n");
+ nbl_chan_reset_queue(chan_mgt, chan_info->chan_type, false);
+ chan_info->rxq_reset_times++;
+ }
+
+check_tx_dma_err:
+ if (nbl_chan_check_dma_err(chan_mgt, chan_info->chan_type, true)) {
+ nbl_err(common, "nbl channel tx dma error\n");
+ nbl_chan_reset_queue(chan_mgt, chan_info->chan_type, true);
+ chan_info->txq_reset_times++;
+ }
+
+ if (++resend_times >= NBL_CHAN_RESEND_MAX_TIMES) {
+ nbl_err(common, "nbl channel resend_times %d\n", resend_times);
+ chan_info->failed_cnt++;
+
+ return -EFAULT;
+ }
+
+ i = NBL_CHAN_TX_WAIT_ACK_TIMES;
+ goto resend;
+}
+
+static int nbl_chan_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ u32 len = 3 * sizeof(u32) + chan_ack->data_len;
+ struct nbl_chan_send_info chan_send;
+ u32 *tmp;
+
+ tmp = kzalloc(len, GFP_ATOMIC);
+ if (!tmp)
+ return -ENOMEM;
+
+ tmp[0] = chan_ack->msg_type;
+ tmp[1] = chan_ack->msgid;
+ tmp[2] = (u32)chan_ack->err;
+ if (chan_ack->data && chan_ack->data_len)
+ memcpy(&tmp[3], chan_ack->data, chan_ack->data_len);
+
+ NBL_CHAN_SEND(chan_send, chan_ack->dstid, NBL_CHAN_MSG_ACK, tmp, len,
+ NULL, 0, 0);
+ nbl_chan_send_msg(chan_mgt, &chan_send);
+ kfree(tmp);
+
+ return 0;
+}
+
+static void nbl_chan_unregister_msg(void *priv, u16 msg_type)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+ nbl_chan_delete_msg_handler(chan_mgt, msg_type);
+}
+
+static int nbl_chan_register_msg(void *priv, u16 msg_type, nbl_chan_resp func,
+ void *callback_priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+ return nbl_chan_add_msg_handler(chan_mgt, msg_type, func,
+ callback_priv);
+}
+
+static bool nbl_chan_check_queue_exist(void *priv, u8 chan_type)
+{
+ struct nbl_channel_mgt *chan_mgt;
+ struct nbl_chan_info *chan_info;
+
+ if (!priv)
+ return false;
+
+ chan_mgt = (struct nbl_channel_mgt *)priv;
+ chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+ return !!chan_info;
+}
+
+static void nbl_chan_register_chan_task(void *priv, u8 chan_type,
+ struct work_struct *task)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+ chan_info->clean_task = task;
+}
+
+static void nbl_chan_set_queue_state(void *priv, enum nbl_chan_state state,
+ u8 chan_type, u8 set)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_info *chan_info =
+ NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+ if (set)
+ set_bit(state, chan_info->state);
+ else
+ clear_bit(state, chan_info->state);
+}
+
static struct nbl_channel_ops chan_ops = {
+ .send_msg = nbl_chan_send_msg,
+ .send_ack = nbl_chan_send_ack,
+ .register_msg = nbl_chan_register_msg,
+ .unregister_msg = nbl_chan_unregister_msg,
+ .cfg_chan_qinfo_map_table = nbl_chan_cfg_qinfo_map_table,
+ .check_queue_exist = nbl_chan_check_queue_exist,
+ .setup_queue = nbl_chan_setup_queue,
+ .teardown_queue = nbl_chan_teardown_queue,
+ .clean_queue_subtask = nbl_chan_clean_queue_subtask,
+ .register_chan_task = nbl_chan_register_chan_task,
+ .set_queue_state = nbl_chan_set_queue_state,
};
static int
@@ -18,6 +1148,7 @@ nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
struct nbl_hw_ops_tbl *hw_ops_tbl;
struct nbl_chan_info *mailbox;
struct device *dev;
+ int ret;
dev = NBL_ADAP_TO_DEV(adapter);
common = NBL_ADAP_TO_COMMON(adapter);
@@ -38,7 +1169,14 @@ nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
mailbox->chan_type = NBL_CHAN_TYPE_MAILBOX;
NBL_CHAN_MGT_TO_MBX(&(*chan_mgt_leonis)->chan_mgt) = mailbox;
+ ret = nbl_chan_init_msg_handler(&(*chan_mgt_leonis)->chan_mgt);
+ if (ret)
+ goto init_chan_msg_handle;
+
return 0;
+
+init_chan_msg_handle:
+ devm_kfree(dev, mailbox);
alloc_mailbox_fail:
devm_kfree(dev, *chan_mgt_leonis);
*chan_mgt_leonis = NULL;
@@ -52,6 +1190,8 @@ nbl_chan_remove_chan_mgt(struct nbl_common_info *common,
{
struct device *dev = NBL_COMMON_TO_DEV(common);
+ nbl_chan_remove_msg_handler(&(*chan_mgt_leonis)->chan_mgt);
+
devm_kfree(dev, NBL_CHAN_MGT_TO_MBX(&(*chan_mgt_leonis)->chan_mgt));
/* check and remove command queue */
@@ -73,6 +1213,8 @@ static int nbl_chan_setup_ops(struct device *dev,
struct nbl_channel_ops_tbl **chan_ops_tbl,
struct nbl_channel_mgt_leonis *chan_mgt)
{
+ int ret;
+
if (!chan_mgt)
return 0;
*chan_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_channel_ops_tbl),
@@ -82,7 +1224,19 @@ static int nbl_chan_setup_ops(struct device *dev,
NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &chan_ops;
NBL_CHAN_OPS_TBL_TO_PRIV(*chan_ops_tbl) = chan_mgt;
+
+ ret = nbl_chan_add_msg_handler(&chan_mgt->chan_mgt, NBL_CHAN_MSG_ACK,
+ nbl_chan_recv_ack_msg, chan_mgt);
+ if (ret)
+ goto err;
+
return 0;
+
+err:
+ devm_kfree(dev, *chan_ops_tbl);
+ *chan_ops_tbl = NULL;
+
+ return ret;
}
int nbl_chan_init_common(void *p, struct nbl_init_param *param)
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
index d207ce79d89f..5047a9237495 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
@@ -22,9 +22,141 @@
#define NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type) \
((chan_mgt)->chan_info[chan_type])
+#define NBL_CHAN_TX_RING_TO_DESC(tx_ring, i) \
+ (&(((struct nbl_chan_tx_desc *)((tx_ring)->desc))[i]))
+#define NBL_CHAN_RX_RING_TO_DESC(rx_ring, i) \
+ (&(((struct nbl_chan_rx_desc *)((rx_ring)->desc))[i]))
+#define NBL_CHAN_TX_RING_TO_BUF(tx_ring, i) (&(((tx_ring)->buf)[i]))
+#define NBL_CHAN_RX_RING_TO_BUF(rx_ring, i) (&(((rx_ring)->buf)[i]))
+
+#define NBL_CHAN_TX_WAIT_US 100
+#define NBL_CHAN_TX_REKICK_WAIT_TIMES 2000
+#define NBL_CHAN_TX_WAIT_TIMES 30000
+
+#define NBL_CHAN_TX_WAIT_ACK_US_MIN 100
+#define NBL_CHAN_TX_WAIT_ACK_US_MAX 120
+#define NBL_CHAN_TX_WAIT_ACK_TIMES 50000
+
+#define NBL_CHAN_QUEUE_LEN 256
+#define NBL_CHAN_BUF_LEN 4096
+
+#define NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN 16
+#define NBL_CHAN_RESEND_MAX_TIMES 3
+
+#define NBL_CHAN_TX_DESC_AVAIL BIT(0)
+#define NBL_CHAN_TX_DESC_USED BIT(1)
+#define NBL_CHAN_RX_DESC_WRITE BIT(1)
+#define NBL_CHAN_RX_DESC_AVAIL BIT(3)
+#define NBL_CHAN_RX_DESC_USED BIT(4)
+
+#define NBL_CHAN_ACK_WAIT_TIME (3 * HZ)
+
+#define NBL_CHAN_HANDLER_TBL_BUCKET_SIZE 512
+
+enum {
+ NBL_MB_RX_QID = 0,
+ NBL_MB_TX_QID = 1,
+};
+
+enum {
+ NBL_MBX_STATUS_IDLE = 0,
+ NBL_MBX_STATUS_WAITING,
+ NBL_MBX_STATUS_TIMEOUT = -1,
+};
+
+struct nbl_chan_tx_param {
+ enum nbl_chan_msg_type msg_type;
+ void *arg;
+ size_t arg_len;
+ u16 dstid;
+ u16 msgid;
+};
+
+struct nbl_chan_buf {
+ void *va;
+ dma_addr_t pa;
+ size_t size;
+};
+
+struct nbl_chan_tx_desc {
+ u16 flags;
+ u16 srcid;
+ u16 dstid;
+ u16 data_len;
+ u16 buf_len;
+ u64 buf_addr;
+ u16 msg_type;
+ u8 data[NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN];
+ u16 msgid;
+ u8 rsv[26];
+} __packed;
+
+struct nbl_chan_rx_desc {
+ u16 flags;
+ u32 buf_len;
+ u16 buf_id;
+ u64 buf_addr;
+} __packed;
+
+struct nbl_chan_ring {
+ void *desc;
+ struct nbl_chan_buf *buf;
+ u16 next_to_use;
+ u16 tail_ptr;
+ u16 next_to_clean;
+ dma_addr_t dma;
+};
+
+#define NBL_CHAN_MSG_INDEX_MAX 63
+
+union nbl_chan_msg_id {
+ struct nbl_chan_msg_id_info {
+ u16 index : 6;
+ u16 loc : 10;
+ } info;
+ u16 id;
+};
+
+struct nbl_chan_waitqueue_head {
+ struct wait_queue_head wait_queue;
+ char *ack_data;
+ int acked;
+ int ack_err;
+ u16 ack_data_len;
+ u16 need_waked;
+ u16 msg_type;
+ s8 status; /* NBL_MBX_STATUS_*, may be negative (TIMEOUT) */
+ u8 msg_index;
+};
+
struct nbl_chan_info {
+ struct nbl_chan_ring txq;
+ struct nbl_chan_ring rxq;
+ struct nbl_chan_waitqueue_head *wait;
+ /* protects the tx ring and wait heads */
+ spinlock_t txq_lock;
+
+ struct work_struct *clean_task;
+
+ u16 wait_head_index;
+ u16 num_txq_entries;
+ u16 num_rxq_entries;
+ u16 txq_buf_size;
+ u16 rxq_buf_size;
+
+ u16 txq_reset_times;
+ u16 rxq_reset_times;
+
+ DECLARE_BITMAP(state, NBL_CHAN_STATE_NBITS);
+
u8 chan_type;
+ /* NBL_CHANNEL_FREEZE_FAILED_CNT consecutive failures freeze the queue */
+ u8 failed_cnt;
+};
+
+struct nbl_chan_msg_node_data {
+ nbl_chan_resp func;
+ void *priv;
};
struct nbl_channel_mgt {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c
new file mode 100644
index 000000000000..c55149a0cb85
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c
@@ -0,0 +1,302 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include "nbl_common.h"
+
+struct nbl_common_wq_mgt {
+ struct workqueue_struct *ctrl_dev_wq;
+ struct workqueue_struct *net_dev_wq;
+};
+
+static struct nbl_common_wq_mgt *wq_mgt;
+
+void nbl_common_queue_work(struct work_struct *task, bool ctrl_task)
+{
+ if (ctrl_task)
+ queue_work(wq_mgt->ctrl_dev_wq, task);
+ else
+ queue_work(wq_mgt->net_dev_wq, task);
+}
+
+void nbl_common_release_task(struct work_struct *task)
+{
+ cancel_work_sync(task);
+}
+
+void nbl_common_alloc_task(struct work_struct *task, void *func)
+{
+ INIT_WORK(task, func);
+}
+
+void nbl_common_flush_task(struct work_struct *task)
+{
+ flush_work(task);
+}
+
+void nbl_common_destroy_wq(void)
+{
+ destroy_workqueue(wq_mgt->net_dev_wq);
+ destroy_workqueue(wq_mgt->ctrl_dev_wq);
+ kfree(wq_mgt);
+}
+
+int nbl_common_create_wq(void)
+{
+ wq_mgt = kzalloc(sizeof(*wq_mgt), GFP_KERNEL);
+ if (!wq_mgt)
+ return -ENOMEM;
+
+ wq_mgt->ctrl_dev_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_UNBOUND,
+ 0, "nbl_ctrldev_wq");
+ if (!wq_mgt->ctrl_dev_wq) {
+ pr_err("Failed to create workqueue nbl_ctrldev_wq\n");
+ goto alloc_ctrl_dev_wq_failed;
+ }
+
+ wq_mgt->net_dev_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_UNBOUND,
+ 0, "nbl_net_dev_wq");
+ if (!wq_mgt->net_dev_wq) {
+ pr_err("Failed to create workqueue nbl_net_dev_wq\n");
+ goto alloc_net_dev_wq_failed;
+ }
+
+ return 0;
+alloc_net_dev_wq_failed:
+ destroy_workqueue(wq_mgt->ctrl_dev_wq);
+alloc_ctrl_dev_wq_failed:
+ kfree(wq_mgt);
+ return -ENOMEM;
+}
+
+u32 nbl_common_pf_id_subtraction_mgtpf_id(struct nbl_common_info *common,
+ u32 pf_id)
+{
+ u32 diff = U32_MAX;
+
+ if (pf_id >= NBL_COMMON_TO_MGT_PF(common))
+ diff = pf_id - NBL_COMMON_TO_MGT_PF(common);
+
+ return diff;
+}
+
+static u32 nbl_common_calc_hash_key(void *key, u32 key_size, u32 bucket_size)
+{
+ u32 hash_val;
+ u32 value = 0;
+ u32 i;
+
+ /* a single-bucket table (a plain list) always hashes to bucket 0 */
+ if (bucket_size == NBL_HASH_TBL_LIST_BUCKET_SIZE)
+ return 0;
+
+ for (i = 0; i < key_size; i++)
+ value += *((u8 *)key + i);
+
+ hash_val = __hash_32(value);
+
+ return hash_val % bucket_size;
+}
+
+/*
+ * Allocate a hash table.
+ * The table supports multi-threaded access when key->lock_need is set.
+ */
+void *nbl_common_init_hash_table(struct nbl_hash_tbl_key *key)
+{
+ struct nbl_hash_tbl_mgt *tbl_mgt;
+ int bucket_size;
+ int i;
+
+ tbl_mgt = devm_kzalloc(key->dev, sizeof(struct nbl_hash_tbl_mgt),
+ GFP_KERNEL);
+ if (!tbl_mgt)
+ return NULL;
+
+ bucket_size = key->bucket_size;
+ tbl_mgt->hash = devm_kcalloc(key->dev, bucket_size,
+ sizeof(struct hlist_head), GFP_KERNEL);
+ if (!tbl_mgt->hash)
+ goto alloc_hash_failed;
+
+ for (i = 0; i < bucket_size; i++)
+ INIT_HLIST_HEAD(tbl_mgt->hash + i);
+
+ memcpy(&tbl_mgt->tbl_key, key, sizeof(struct nbl_hash_tbl_key));
+
+ if (key->lock_need)
+ mutex_init(&tbl_mgt->lock);
+
+ return tbl_mgt;
+
+alloc_hash_failed:
+ devm_kfree(key->dev, tbl_mgt);
+
+ return NULL;
+}
+
+/*
+ * Allocate a hash node and add it to the matching bucket list.
+ */
+int nbl_common_alloc_hash_node(void *priv, void *key, void *data,
+ void **out_data)
+{
+ struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv;
+ struct nbl_hash_entry_node *hash_node;
+ u32 hash_val;
+ u16 key_size;
+ u16 data_size;
+
+ hash_node = devm_kzalloc(tbl_mgt->tbl_key.dev,
+ sizeof(struct nbl_hash_entry_node),
+ GFP_KERNEL);
+ if (!hash_node)
+ return -ENOMEM;
+
+ key_size = tbl_mgt->tbl_key.key_size;
+ hash_node->key =
+ devm_kzalloc(tbl_mgt->tbl_key.dev, key_size, GFP_KERNEL);
+ if (!hash_node->key)
+ goto alloc_key_failed;
+
+ data_size = tbl_mgt->tbl_key.data_size;
+ hash_node->data =
+ devm_kzalloc(tbl_mgt->tbl_key.dev, data_size, GFP_KERNEL);
+ if (!hash_node->data)
+ goto alloc_data_failed;
+
+ memcpy(hash_node->key, key, key_size);
+ memcpy(hash_node->data, data, data_size);
+
+ hash_val = nbl_common_calc_hash_key(key, key_size,
+ tbl_mgt->tbl_key.bucket_size);
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_lock(&tbl_mgt->lock);
+
+ hlist_add_head(&hash_node->node, tbl_mgt->hash + hash_val);
+ tbl_mgt->node_num++;
+ if (out_data)
+ *out_data = hash_node->data;
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_unlock(&tbl_mgt->lock);
+
+ return 0;
+
+alloc_data_failed:
+ devm_kfree(tbl_mgt->tbl_key.dev, hash_node->key);
+alloc_key_failed:
+ devm_kfree(tbl_mgt->tbl_key.dev, hash_node);
+
+ return -ENOMEM;
+}
+
+/*
+ * Look up a hash node; return its data if the node exists.
+ */
+void *nbl_common_get_hash_node(void *priv, void *key)
+{
+ struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv;
+ struct nbl_hash_entry_node *hash_node;
+ struct hlist_head *head;
+ void *data = NULL;
+ u32 hash_val;
+ u16 key_size;
+
+ key_size = tbl_mgt->tbl_key.key_size;
+ hash_val = nbl_common_calc_hash_key(key, key_size,
+ tbl_mgt->tbl_key.bucket_size);
+ head = tbl_mgt->hash + hash_val;
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_lock(&tbl_mgt->lock);
+
+ hlist_for_each_entry(hash_node, head, node)
+ if (!memcmp(hash_node->key, key, key_size)) {
+ data = hash_node->data;
+ break;
+ }
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_unlock(&tbl_mgt->lock);
+
+ return data;
+}
+
+static void nbl_common_remove_hash_node(struct nbl_hash_tbl_mgt *tbl_mgt,
+ struct nbl_hash_entry_node *hash_node)
+{
+ hlist_del(&hash_node->node);
+ devm_kfree(tbl_mgt->tbl_key.dev, hash_node->key);
+ devm_kfree(tbl_mgt->tbl_key.dev, hash_node->data);
+ devm_kfree(tbl_mgt->tbl_key.dev, hash_node);
+ tbl_mgt->node_num--;
+}
+
+/*
+ * Free the hash node matching the given key, if present.
+ */
+void nbl_common_free_hash_node(void *priv, void *key)
+{
+ struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv;
+ struct nbl_hash_entry_node *hash_node;
+ struct hlist_head *head;
+ u32 hash_val;
+ u16 key_size;
+
+ key_size = tbl_mgt->tbl_key.key_size;
+ hash_val = nbl_common_calc_hash_key(key, key_size,
+ tbl_mgt->tbl_key.bucket_size);
+ head = tbl_mgt->hash + hash_val;
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_lock(&tbl_mgt->lock);
+
+ hlist_for_each_entry(hash_node, head, node)
+ if (!memcmp(hash_node->key, key, key_size))
+ break;
+
+ if (hash_node)
+ nbl_common_remove_hash_node(tbl_mgt, hash_node);
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_unlock(&tbl_mgt->lock);
+}
+
+void nbl_common_remove_hash_table(void *priv, struct nbl_hash_tbl_del_key *key)
+{
+ struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv;
+ struct nbl_hash_entry_node *hash_node;
+ struct hlist_node *safe_node;
+ struct hlist_head *head;
+ struct device *dev;
+ u32 i;
+
+ if (!priv)
+ return;
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_lock(&tbl_mgt->lock);
+
+ for (i = 0; i < tbl_mgt->tbl_key.bucket_size; i++) {
+ head = tbl_mgt->hash + i;
+ hlist_for_each_entry_safe(hash_node, safe_node, head, node) {
+ if (key && key->action_func)
+ key->action_func(key->action_priv,
+ hash_node->key,
+ hash_node->data);
+ nbl_common_remove_hash_node(tbl_mgt, hash_node);
+ }
+ }
+
+ if (tbl_mgt->tbl_key.lock_need)
+ mutex_unlock(&tbl_mgt->lock);
+
+ dev = tbl_mgt->tbl_key.dev;
+ devm_kfree(dev, tbl_mgt->hash);
+ devm_kfree(dev, tbl_mgt);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h
new file mode 100644
index 000000000000..00afd16ebac4
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_COMMON_H_
+#define _NBL_COMMON_H_
+
+#include "nbl_def_common.h"
+
+/* a plain list needs only a single bucket */
+#define NBL_HASH_TBL_LIST_BUCKET_SIZE 1
+
+struct nbl_hash_tbl_mgt {
+ struct nbl_hash_tbl_key tbl_key;
+ struct hlist_head *hash;
+ struct mutex lock; /* support multi thread */
+ u16 node_num;
+};
+
+/* generic hash entry; key and data are allocated per node */
+struct nbl_hash_entry_node {
+ struct hlist_node node;
+ void *key;
+ void *data;
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index 9561860c161c..23a2111104c5 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -6,7 +6,136 @@
#include "nbl_hw_leonis.h"
+static void nbl_hw_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr,
+ u8 txrx)
+{
+ /* local_qid 0 and 1 denote rx and tx queue respectively */
+ u32 local_qid = txrx;
+ u32 value = ((u32)tail_ptr << 16) | local_qid;
+
+ /* ensure descriptor writes are visible before ringing the doorbell */
+ wmb();
+ nbl_mbx_wr32(priv, NBL_MAILBOX_NOTIFY_ADDR, value);
+}
+
+static void nbl_hw_config_mailbox_rxq(void *priv, dma_addr_t dma_addr,
+ int size_bwid)
+{
+ struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_rx_table = { 0 };
+
+ qinfo_cfg_rx_table.queue_rst = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_rx_table,
+ sizeof(qinfo_cfg_rx_table));
+
+ qinfo_cfg_rx_table.queue_base_addr_l = (u32)(dma_addr & 0xFFFFFFFF);
+ qinfo_cfg_rx_table.queue_base_addr_h = (u32)(dma_addr >> 32);
+ qinfo_cfg_rx_table.queue_size_bwind = (u32)size_bwid;
+ qinfo_cfg_rx_table.queue_rst = 0;
+ qinfo_cfg_rx_table.queue_en = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_rx_table,
+ sizeof(qinfo_cfg_rx_table));
+}
+
+static void nbl_hw_config_mailbox_txq(void *priv, dma_addr_t dma_addr,
+ int size_bwid)
+{
+ struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tx_table = { 0 };
+
+ qinfo_cfg_tx_table.queue_rst = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_tx_table,
+ sizeof(qinfo_cfg_tx_table));
+
+ qinfo_cfg_tx_table.queue_base_addr_l = (u32)(dma_addr & 0xFFFFFFFF);
+ qinfo_cfg_tx_table.queue_base_addr_h = (u32)(dma_addr >> 32);
+ qinfo_cfg_tx_table.queue_size_bwind = (u32)size_bwid;
+ qinfo_cfg_tx_table.queue_rst = 0;
+ qinfo_cfg_tx_table.queue_en = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_tx_table,
+ sizeof(qinfo_cfg_tx_table));
+}
+
+static void nbl_hw_stop_mailbox_rxq(void *priv)
+{
+ struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_rx_table = { 0 };
+
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_rx_table,
+ sizeof(qinfo_cfg_rx_table));
+}
+
+static void nbl_hw_stop_mailbox_txq(void *priv)
+{
+ struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tx_table = { 0 };
+
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_tx_table,
+ sizeof(qinfo_cfg_tx_table));
+}
+
+static u16 nbl_hw_get_mailbox_rx_tail_ptr(void *priv)
+{
+ struct nbl_mailbox_qinfo_cfg_dbg_tbl cfg_dbg_tbl = { 0 };
+
+ nbl_hw_read_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_DBG_TABLE_ADDR,
+ (u8 *)&cfg_dbg_tbl, sizeof(cfg_dbg_tbl));
+ return cfg_dbg_tbl.rx_tail_ptr;
+}
+
+static bool nbl_hw_check_mailbox_dma_err(void *priv, bool tx)
+{
+ struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tbl = { 0 };
+ u64 addr;
+
+ if (tx)
+ addr = NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR;
+ else
+ addr = NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR;
+
+ nbl_hw_read_mbx_regs(priv, addr, (u8 *)&qinfo_cfg_tbl,
+ sizeof(qinfo_cfg_tbl));
+ return !!qinfo_cfg_tbl.dif_err;
+}
+
+static u32 nbl_hw_get_host_pf_mask(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ u32 data;
+
+ nbl_hw_rd_regs(hw_mgt, NBL_PCIE_HOST_K_PF_MASK_REG, (u8 *)&data,
+ sizeof(data));
+ return data;
+}
+
+static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus,
+ u16 devid, u16 function)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_mailbox_qinfo_map_table mb_qinfo_map;
+
+ memset(&mb_qinfo_map, 0, sizeof(mb_qinfo_map));
+ mb_qinfo_map.function = function;
+ mb_qinfo_map.devid = devid;
+ mb_qinfo_map.bus = bus;
+ mb_qinfo_map.msix_idx_valid = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id),
+ (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map));
+}
+
static struct nbl_hw_ops hw_ops = {
+ .update_mailbox_queue_tail_ptr = nbl_hw_update_mailbox_queue_tail_ptr,
+ .config_mailbox_rxq = nbl_hw_config_mailbox_rxq,
+ .config_mailbox_txq = nbl_hw_config_mailbox_txq,
+ .stop_mailbox_rxq = nbl_hw_stop_mailbox_rxq,
+ .stop_mailbox_txq = nbl_hw_stop_mailbox_txq,
+ .get_mailbox_rx_tail_ptr = nbl_hw_get_mailbox_rx_tail_ptr,
+ .check_mailbox_dma_err = nbl_hw_check_mailbox_dma_err,
+ .get_host_pf_mask = nbl_hw_get_host_pf_mask,
+ .cfg_mailbox_qinfo = nbl_hw_cfg_mailbox_qinfo,
};
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
index 9fdc1429a4ad..d0ac0e08c5af 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
@@ -13,6 +13,38 @@
#define NBL_CHAN_OPS_TBL_TO_OPS(chan_ops_tbl) ((chan_ops_tbl)->ops)
#define NBL_CHAN_OPS_TBL_TO_PRIV(chan_ops_tbl) ((chan_ops_tbl)->priv)
+#define NBL_CHAN_SEND(chan_send, dst_id, mesg_type, argument, arg_length,\
+ response, resp_length, need_ack) \
+do { \
+ typeof(chan_send) *__chan_send = &(chan_send); \
+ __chan_send->dstid = (dst_id); \
+ __chan_send->msg_type = (mesg_type); \
+ __chan_send->arg = (argument); \
+ __chan_send->arg_len = (arg_length); \
+ __chan_send->resp = (response); \
+ __chan_send->resp_len = (resp_length); \
+ __chan_send->ack = (need_ack); \
+} while (0)
+
+#define NBL_CHAN_ACK(chan_ack, dst_id, mesg_type, msg_id, err_code, ack_data, \
+ data_length) \
+do { \
+ typeof(chan_ack) *__chan_ack = &(chan_ack); \
+ __chan_ack->dstid = (dst_id); \
+ __chan_ack->msg_type = (mesg_type); \
+ __chan_ack->msgid = (msg_id); \
+ __chan_ack->err = (err_code); \
+ __chan_ack->data = (ack_data); \
+ __chan_ack->data_len = (data_length); \
+} while (0)
+
+typedef void (*nbl_chan_resp)(void *, u16, u16, void *, u32);
+
+enum {
+ NBL_CHAN_RESP_OK,
+ NBL_CHAN_RESP_ERR,
+};
+
enum nbl_chan_msg_type {
NBL_CHAN_MSG_ACK,
NBL_CHAN_MSG_ADD_MACVLAN,
@@ -272,6 +304,15 @@ enum nbl_chan_msg_type {
NBL_CHAN_MSG_MAX,
};
+#define NBL_CHANNEL_FREEZE_FAILED_CNT 3
+
+enum nbl_chan_state {
+ NBL_CHAN_INTERRUPT_READY,
+ NBL_CHAN_RESETTING,
+ NBL_CHAN_ABNORMAL,
+ NBL_CHAN_STATE_NBITS
+};
+
struct nbl_chan_param_register_net_info {
u16 pf_bdf;
u64 vf_bar_start;
@@ -351,12 +392,82 @@ struct nbl_chan_param_set_rxfh_indir {
u32 indir[NBL_RXFH_INDIR_MAX_SIZE];
};
+struct nbl_register_net_param {
+ u16 pf_bdf;
+ u64 pf_bar_start;
+};
+
+struct nbl_register_net_result {
+ u16 tx_queue_num;
+ u16 rx_queue_num;
+ u16 queue_size;
+ u16 rdma_enable;
+
+ u64 hw_features;
+ u64 features;
+
+ u16 max_mtu;
+ u16 queue_offset;
+
+ u8 mac[ETH_ALEN];
+ u16 vlan_proto;
+ u16 vlan_tci;
+ u32 rate;
+ bool trusted;
+
+ u64 vlan_features;
+ u64 hw_enc_features;
+};
+
+struct nbl_board_port_info {
+ u8 eth_num;
+ u8 eth_speed;
+ u8 p4_version;
+ u8 rsv[5];
+};
+
+struct nbl_chan_send_info {
+ void *arg;
+ size_t arg_len;
+ void *resp;
+ size_t resp_len;
+ u16 dstid;
+ u16 msg_type;
+ u16 ack;
+ u16 ack_len;
+};
+
+struct nbl_chan_ack_info {
+ void *data;
+ int err;
+ u32 data_len;
+ u16 dstid;
+ u16 msg_type;
+ u16 msgid;
+};
+
enum nbl_channel_type {
NBL_CHAN_TYPE_MAILBOX,
NBL_CHAN_TYPE_MAX
};
struct nbl_channel_ops {
+ int (*send_msg)(void *priv, struct nbl_chan_send_info *chan_send);
+ int (*send_ack)(void *priv, struct nbl_chan_ack_info *chan_ack);
+ int (*register_msg)(void *priv, u16 msg_type, nbl_chan_resp func,
+ void *callback_priv);
+ void (*unregister_msg)(void *priv, u16 msg_type);
+ int (*cfg_chan_qinfo_map_table)(void *priv, u8 chan_type);
+ bool (*check_queue_exist)(void *priv, u8 chan_type);
+ int (*setup_queue)(void *priv, u8 chan_type);
+ int (*teardown_queue)(void *priv, u8 chan_type);
+ void (*clean_queue_subtask)(void *priv, u8 chan_type);
+ int (*setup_keepalive)(void *priv, u16 dest_id, u8 chan_type);
+ void (*remove_keepalive)(void *priv, u8 chan_type);
+ void (*register_chan_task)(void *priv, u8 chan_type,
+ struct work_struct *task);
+ void (*set_queue_state)(void *priv, enum nbl_chan_state state,
+ u8 chan_type, u8 set);
};
struct nbl_channel_ops_tbl {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
index 7b71f6075ccf..f62623ac8b76 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
@@ -70,4 +70,78 @@ struct nbl_common_info {
u8 is_ctrl;
};
+struct nbl_hash_tbl_key {
+ struct device *dev;
+ u16 key_size;
+ u16 data_size; /* does not include the key or node members */
+ u16 bucket_size;
+ u8 lock_need; /* true: supports multi-threaded operation */
+ u8 resv;
+};
+
+#define NBL_HASH_TBL_KEY_INIT(key, dev_arg, key_size_arg, data_size_arg,\
+ bucket_size_arg, lock_need_arg) \
+do { \
+ typeof(key) __key = key; \
+ __key->dev = dev_arg; \
+ __key->key_size = key_size_arg; \
+ __key->data_size = data_size_arg; \
+ __key->bucket_size = bucket_size_arg; \
+ __key->lock_need = lock_need_arg; \
+ __key->resv = 0; \
+} while (0)
+
+enum nbl_hash_tbl_op_type {
+ NBL_HASH_TBL_OP_SHOW = 0,
+ NBL_HASH_TBL_OP_DELETE,
+};
+
+struct nbl_hash_tbl_del_key {
+ void *action_priv;
+ void (*action_func)(void *priv, void *key, void *data);
+};
+
+#define NBL_HASH_TBL_DEL_KEY_INIT(key, priv_arg, act_func_arg) \
+do { \
+ typeof(key) __key = key; \
+ __key->action_priv = priv_arg; \
+ __key->action_func = act_func_arg; \
+} while (0)
+
+struct nbl_hash_tbl_scan_key {
+ enum nbl_hash_tbl_op_type op_type;
+ void *match_condition;
+ /* match ret value must be 0 if the node accord with the condition */
+ int (*match_func)(void *condition, void *key, void *data);
+ void *action_priv;
+ void (*action_func)(void *priv, void *key, void *data);
+};
+
+#define NBL_HASH_TBL_SCAN_KEY_INIT(key, op_type_arg, con_arg, match_func_arg,\
+ priv_arg, act_func_arg) \
+do { \
+ typeof(key) __key = key; \
+ __key->op_type = op_type_arg; \
+ __key->match_condition = con_arg; \
+ __key->match_func = match_func_arg; \
+ __key->action_priv = priv_arg; \
+ __key->action_func = act_func_arg; \
+} while (0)
+
+void nbl_common_queue_work(struct work_struct *task, bool ctrl_task);
+void nbl_common_release_task(struct work_struct *task);
+void nbl_common_alloc_task(struct work_struct *task, void *func);
+void nbl_common_flush_task(struct work_struct *task);
+
+void nbl_common_destroy_wq(void);
+int nbl_common_create_wq(void);
+u32 nbl_common_pf_id_subtraction_mgtpf_id(struct nbl_common_info *common,
+ u32 pf_id);
+
+void *nbl_common_init_hash_table(struct nbl_hash_tbl_key *key);
+void nbl_common_remove_hash_table(void *priv, struct nbl_hash_tbl_del_key *key);
+int nbl_common_alloc_hash_node(void *priv, void *key, void *data,
+ void **out_data);
+void *nbl_common_get_hash_node(void *priv, void *key);
+void nbl_common_free_hash_node(void *priv, void *key);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
index 6ac72e26ccd6..977f2bba3d65 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
@@ -10,6 +10,37 @@
#include "nbl_include.h"
struct nbl_hw_ops {
+ void (*configure_msix_map)(void *priv, u16 func_id, bool valid,
+ dma_addr_t dma_addr, u8 bus, u8 devid,
+ u8 function);
+ void (*configure_msix_info)(void *priv, u16 func_id, bool valid,
+ u16 interrupt_id, u8 bus, u8 devid,
+ u8 function, bool net_msix_mask_en);
+ void (*update_mailbox_queue_tail_ptr)(void *priv, u16 tail_ptr,
+ u8 txrx);
+ void (*config_mailbox_rxq)(void *priv, dma_addr_t dma_addr,
+ int size_bwid);
+ void (*config_mailbox_txq)(void *priv, dma_addr_t dma_addr,
+ int size_bwid);
+ void (*stop_mailbox_rxq)(void *priv);
+ void (*stop_mailbox_txq)(void *priv);
+ u16 (*get_mailbox_rx_tail_ptr)(void *priv);
+ bool (*check_mailbox_dma_err)(void *priv, bool tx);
+ u32 (*get_host_pf_mask)(void *priv);
+ u32 (*get_real_bus)(void *priv);
+ u64 (*get_pf_bar_addr)(void *priv, u16 func_id);
+
+ void (*cfg_mailbox_qinfo)(void *priv, u16 func_id, u16 bus, u16 devid,
+ u16 function);
+ void (*enable_mailbox_irq)(void *priv, u16 func_id, bool enable_msix,
+ u16 global_vec_id);
+ void (*enable_msix_irq)(void *priv, u16 global_vec_id);
+ u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id,
+ u32 *irq_data);
+ u32 (*get_fw_eth_num)(void *priv);
+ u32 (*get_fw_eth_map)(void *priv);
+ void (*get_board_info)(void *priv, struct nbl_board_port_info *board);
+ u32 (*get_quirks)(void *priv);
};
struct nbl_hw_ops_tbl {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 7999671fcb26..375f7d530012 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -12,6 +12,12 @@
/* ------ Basic definitions ------- */
#define NBL_DRIVER_NAME "nbl_core"
+#define NBL_MAX_PF 8
+#define NBL_NEXT_ID(id, max) \
+ ({ \
+ typeof(id) _id = (id); \
+ ((_id) == (max) ? 0 : (_id) + 1); \
+ })
enum nbl_product_type {
NBL_LEONIS_TYPE,
NBL_PRODUCT_MAX,
@@ -48,4 +54,13 @@ struct nbl_txrx_queue_param {
u16 rxcsum;
};
+struct nbl_vnet_queue_info_param {
+ u32 function_id;
+ u32 device_id;
+ u32 bus_id;
+ u32 msix_idx;
+ u32 msix_idx_valid;
+ u32 valid;
+};
+
#endif
--
2.47.3
* [PATCH v3 net-next 05/15] net/nebula-matrix: add resource layer common part implementation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (3 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 04/15] net/nebula-matrix: add channel layer implementation and the corresponding hw interfaces illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 06/15] net/nebula-matrix: add intr resource implementation illusion.wang
` (10 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
The resource layer manages the entries and data of the various modules
inside the chip and performs the corresponding entry management
operations; it describes the module capabilities of the chip and the
data it manages.
The resource layer comprises the following sub-modules: common,
interrupt, txrx, queue, and vsi.
This patch adds the common part, which implements the conversions
among vsi_id, func_id, eth_id and pf_id. These conversions are used
both by the upper layers and within the resource layer itself. It
also provides the register_net/unregister_net interfaces.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 1 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 84 ++++
.../nbl_hw_leonis/nbl_resource_leonis.c | 376 ++++++++++++++++++
.../nbl_hw_leonis/nbl_resource_leonis.h | 1 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.c | 299 ++++++++++++++
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 101 +++++
.../nbl/nbl_include/nbl_def_common.h | 17 +
.../nbl/nbl_include/nbl_def_resource.h | 7 +
.../nbl/nbl_include/nbl_include.h | 44 ++
9 files changed, 930 insertions(+)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index db646a54877b..8bf3ca9c6862 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -9,6 +9,7 @@ nbl_core-objs += nbl_common/nbl_common.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
+ nbl_hw/nbl_resource.o \
nbl_core/nbl_dispatch.o \
nbl_core/nbl_service.o \
nbl_core/nbl_dev.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index 23a2111104c5..63d01f790af8 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -5,6 +5,19 @@
*/
#include "nbl_hw_leonis.h"
+static u32 nbl_hw_get_quirks(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = priv;
+ u32 quirks;
+
+ nbl_hw_read_mbx_regs(hw_mgt, NBL_LEONIS_QUIRKS_OFFSET, (u8 *)&quirks,
+ sizeof(u32));
+
+ if (quirks == NBL_LEONIS_ILLEGAL_REG_VALUE)
+ return 0;
+
+ return quirks;
+}
static void nbl_hw_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr,
u8 txrx)
@@ -110,6 +123,37 @@ static u32 nbl_hw_get_host_pf_mask(void *priv)
return data;
}
+static u32 nbl_hw_get_real_bus(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ u32 data;
+
+ data = nbl_hw_rd32(hw_mgt, NBL_PCIE_HOST_TL_CFG_BUSDEV);
+ return data >> 5;
+}
+
+static u64 nbl_hw_get_pf_bar_addr(void *priv, u16 func_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ u64 addr;
+ u32 val;
+ u32 selector;
+
+ selector = NBL_LB_PF_CONFIGSPACE_SELECT_OFFSET +
+ func_id * NBL_LB_PF_CONFIGSPACE_SELECT_STRIDE;
+ nbl_hw_wr32(hw_mgt, NBL_LB_PCIEX16_TOP_AHB, selector);
+
+ val = nbl_hw_rd32(hw_mgt,
+ NBL_LB_PF_CONFIGSPACE_BASE_ADDR + PCI_BASE_ADDRESS_0);
+ addr = (u64)(val & PCI_BASE_ADDRESS_MEM_MASK);
+
+ val = nbl_hw_rd32(hw_mgt, NBL_LB_PF_CONFIGSPACE_BASE_ADDR +
+ PCI_BASE_ADDRESS_0 + 4);
+ addr |= ((u64)val << 32);
+
+ return addr;
+}
+
static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus,
u16 devid, u16 function)
{
@@ -125,6 +169,39 @@ static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus,
(u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map));
}
+static void nbl_hw_get_board_info(void *priv,
+ struct nbl_board_port_info *board_info)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ union nbl_fw_board_cfg_dw3 dw3 = { .info = { 0 } };
+
+ nbl_hw_read_mbx_regs(hw_mgt, NBL_FW_BOARD_DW3_OFFSET, (u8 *)&dw3,
+ sizeof(dw3));
+ board_info->eth_num = dw3.info.port_num;
+ board_info->eth_speed = dw3.info.port_speed;
+ board_info->p4_version = dw3.info.p4_version;
+}
+
+static u32 nbl_hw_get_fw_eth_num(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ union nbl_fw_board_cfg_dw3 dw3 = { .info = { 0 } };
+
+ nbl_hw_read_mbx_regs(hw_mgt, NBL_FW_BOARD_DW3_OFFSET, (u8 *)&dw3,
+ sizeof(dw3));
+ return dw3.info.port_num;
+}
+
+static u32 nbl_hw_get_fw_eth_map(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ union nbl_fw_board_cfg_dw6 dw6 = { .info = { 0 } };
+
+ nbl_hw_read_mbx_regs(hw_mgt, NBL_FW_BOARD_DW6_OFFSET, (u8 *)&dw6,
+ sizeof(dw6));
+ return dw6.info.eth_bitmap;
+}
+
static struct nbl_hw_ops hw_ops = {
.update_mailbox_queue_tail_ptr = nbl_hw_update_mailbox_queue_tail_ptr,
.config_mailbox_rxq = nbl_hw_config_mailbox_rxq,
@@ -134,8 +211,15 @@ static struct nbl_hw_ops hw_ops = {
.get_mailbox_rx_tail_ptr = nbl_hw_get_mailbox_rx_tail_ptr,
.check_mailbox_dma_err = nbl_hw_check_mailbox_dma_err,
.get_host_pf_mask = nbl_hw_get_host_pf_mask,
+ .get_real_bus = nbl_hw_get_real_bus,
+ .get_pf_bar_addr = nbl_hw_get_pf_bar_addr,
+
.cfg_mailbox_qinfo = nbl_hw_cfg_mailbox_qinfo,
+ .get_fw_eth_num = nbl_hw_get_fw_eth_num,
+ .get_fw_eth_map = nbl_hw_get_fw_eth_map,
+ .get_board_info = nbl_hw_get_board_info,
+ .get_quirks = nbl_hw_get_quirks,
};
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
index d08ab16ed493..308d675478b6 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
@@ -7,12 +7,167 @@
#include <linux/if_vlan.h>
#include "nbl_resource_leonis.h"
+static int nbl_res_get_queue_num(void *priv, u16 func_id, u16 *tx_queue_num,
+ u16 *rx_queue_num);
static void nbl_res_setup_common_ops(struct nbl_resource_mgt *res_mgt)
{
+ res_mgt->common_ops.get_queue_num = nbl_res_get_queue_num;
+}
+
+static u32 nbl_res_get_pfvf_queue_num(struct nbl_resource_mgt *res_mgt,
+ int pfid, int vfid)
+{
+ struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt);
+ struct nbl_net_ring_num_info *num_info = &res_info->net_ring_num_info;
+ u16 func_id = nbl_res_pfvfid_to_func_id(res_mgt, pfid, vfid);
+ u32 queue_num = 0;
+
+ if (num_info->net_max_qp_num[func_id] != 0)
+ queue_num = num_info->net_max_qp_num[func_id];
+ else
+ queue_num = num_info->pf_def_max_net_qp_num;
+
+ if (queue_num > NBL_MAX_TXRX_QUEUE_PER_FUNC) {
+ nbl_warn(NBL_RES_MGT_TO_COMMON(res_mgt),
+ "Invalid queue num %u for func %d, use default",
+ queue_num, func_id);
+ queue_num = NBL_DEFAULT_PF_HW_QUEUE_NUM;
+ }
+
+ return queue_num;
+}
+
+static int nbl_res_get_queue_num(void *priv, u16 func_id, u16 *tx_queue_num,
+ u16 *rx_queue_num)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)(priv);
+ int pfid, vfid;
+
+ nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid);
+
+ *tx_queue_num = nbl_res_get_pfvf_queue_num(res_mgt, pfid, vfid);
+ *rx_queue_num = nbl_res_get_pfvf_queue_num(res_mgt, pfid, vfid);
+
+ return 0;
+}
+
+static int
+nbl_res_save_bar_info(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ struct nbl_register_net_param *register_param)
+{
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_sriov_info *sriov_info =
+ &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id];
+ void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ u64 pf_bar_start;
+
+ if (func_id < NBL_RES_MGT_TO_PF_NUM(res_mgt)) {
+ pf_bar_start = hw_ops->get_pf_bar_addr(p, func_id);
+ sriov_info->pf_bar_start = pf_bar_start;
+ dev_info(dev, "sriov_info, pf_bar_start:%llx\n",
+ sriov_info->pf_bar_start);
+ }
+ return 0;
+}
+
+static u32 nbl_res_get_quirks(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+ return hw_ops->get_quirks(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+}
+
+static int nbl_res_register_net(void *priv, u16 func_id,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ netdev_features_t csumo_features = 0;
+ netdev_features_t tso_features = 0;
+ netdev_features_t pf_features = 0;
+ netdev_features_t vlano_features = 0;
+ u16 tx_queue_num, rx_queue_num;
+ u32 quirks;
+ int ret = 0;
+
+ pf_features = NBL_FEATURE(NETIF_F_NTUPLE);
+ register_result->trusted = 1;
+
+ quirks = nbl_res_get_quirks(res_mgt);
+ if (!(quirks & BIT(NBL_QUIRKS_NO_TOE))) {
+ csumo_features = NBL_FEATURE(NETIF_F_RXCSUM) |
+ NBL_FEATURE(NETIF_F_IP_CSUM) |
+ NBL_FEATURE(NETIF_F_IPV6_CSUM);
+ tso_features = NBL_FEATURE(NETIF_F_TSO) |
+ NBL_FEATURE(NETIF_F_TSO6) |
+ NBL_FEATURE(NETIF_F_GSO_UDP_L4);
+ }
+
+ vlano_features = NBL_FEATURE(NETIF_F_HW_VLAN_CTAG_TX) |
+ NBL_FEATURE(NETIF_F_HW_VLAN_CTAG_RX) |
+ NBL_FEATURE(NETIF_F_HW_VLAN_STAG_TX) |
+ NBL_FEATURE(NETIF_F_HW_VLAN_STAG_RX);
+
+ register_result->hw_features |=
+ pf_features | csumo_features | tso_features | vlano_features |
+ NBL_FEATURE(NETIF_F_SG) | NBL_FEATURE(NETIF_F_RXHASH);
+
+ register_result->features |= register_result->hw_features;
+
+ register_result->vlan_features = register_result->features;
+
+ register_result->max_mtu = NBL_MAX_JUMBO_FRAME_SIZE - NBL_PKT_HDR_PAD;
+
+ register_result->vlan_proto = 0;
+ register_result->vlan_tci = 0;
+ register_result->rate = 0;
+
+ nbl_res_get_queue_num(res_mgt, func_id, &tx_queue_num, &rx_queue_num);
+ register_result->tx_queue_num = tx_queue_num;
+ register_result->rx_queue_num = rx_queue_num;
+ register_result->queue_size = NBL_DEFAULT_DESC_NUM;
+
+ ret = nbl_res_save_bar_info(res_mgt, func_id, register_param);
+ return ret;
+}
+
+static int nbl_res_unregister_net(void *priv, u16 func_id)
+{
+ return 0;
+}
+
+static u16 nbl_res_get_vsi_id(void *priv, u16 func_id, u16 type)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+
+ return nbl_res_func_id_to_vsi_id(res_mgt, func_id, type);
+}
+
+static void nbl_res_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
+ u8 *logic_eth_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+ u16 pf_id = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id);
+
+ *eth_mode = eth_info->eth_num;
+ if (pf_id < eth_info->eth_num) {
+ *eth_id = eth_info->eth_id[pf_id];
+ *logic_eth_id = pf_id;
+ } else {
+ /* if pf_id >= eth_num, use eth_id 0 */
+ *eth_id = eth_info->eth_id[0];
+ *logic_eth_id = 0;
+ }
}
static struct nbl_resource_ops res_ops = {
+ .register_net = nbl_res_register_net,
+ .unregister_net = nbl_res_unregister_net,
+ .get_vsi_id = nbl_res_get_vsi_id,
+ .get_eth_id = nbl_res_get_eth_id,
};
static struct nbl_res_product_ops product_ops = {
@@ -76,14 +231,235 @@ static int nbl_res_setup_ops(struct device *dev,
return 0;
}
+static int nbl_res_ctrl_dev_setup_eth_info(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_eth_info *eth_info;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ u32 eth_num = 0;
+ u32 eth_bitmap, eth_id;
+ int i;
+
+ eth_info = devm_kzalloc(dev, sizeof(struct nbl_eth_info), GFP_KERNEL);
+ if (!eth_info)
+ return -ENOMEM;
+
+ res_mgt->resource_info->eth_info = eth_info;
+
+ eth_info->eth_num =
+ (u8)hw_ops->get_fw_eth_num(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+ eth_bitmap = hw_ops->get_fw_eth_map(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+ /* for 2 eth port board, the eth_id is 0, 2 */
+ for (i = 0; i < NBL_MAX_ETHERNET; i++) {
+ if ((1 << i) & eth_bitmap) {
+ set_bit(i, eth_info->eth_bitmap);
+ eth_info->eth_id[eth_num] = i;
+ eth_info->logic_eth_id[i] = eth_num;
+ eth_num++;
+ }
+ }
+
+ for (i = 0; i < NBL_RES_MGT_TO_PF_NUM(res_mgt); i++) {
+ /* if pf_id < eth_num, the PF maps to its corresponding eth_id */
+ if (i < eth_num) {
+ eth_id = eth_info->eth_id[i];
+ eth_info->pf_bitmap[eth_id] |= BIT(i);
+ } else {
+ /* if pf_id >= eth_num, the PF maps to eth 0 */
+ eth_info->pf_bitmap[0] |= BIT(i);
+ }
+ }
+
+ return 0;
+}
+
+static void nbl_res_ctrl_dev_remove_eth_info(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_eth_info **eth_info = &NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+
+ if (*eth_info) {
+ devm_kfree(dev, *eth_info);
+ *eth_info = NULL;
+ }
+}
+
+static int nbl_res_ctrl_dev_sriov_info_init(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_sriov_info *sriov_info;
+ void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ u16 func_id;
+ u16 function;
+
+ sriov_info = devm_kcalloc(dev, NBL_RES_MGT_TO_PF_NUM(res_mgt),
+ sizeof(struct nbl_sriov_info), GFP_KERNEL);
+ if (!sriov_info)
+ return -ENOMEM;
+
+ res_mgt->resource_info->sriov_info = sriov_info;
+
+ for (func_id = 0; func_id < NBL_RES_MGT_TO_PF_NUM(res_mgt); func_id++) {
+ sriov_info = &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id];
+ function = NBL_COMMON_TO_PCI_FUNC_ID(common) + func_id;
+
+ common->hw_bus = (u8)hw_ops->get_real_bus(p);
+ sriov_info->bdf = PCI_DEVID(common->hw_bus,
+ PCI_DEVFN(common->devid, function));
+ }
+ return 0;
+}
+
+static void nbl_res_ctrl_dev_sriov_info_remove(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_sriov_info **sriov_info =
+ &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt);
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+
+ if (!(*sriov_info))
+ return;
+
+ devm_kfree(dev, *sriov_info);
+ *sriov_info = NULL;
+}
+
+static int nbl_res_ctrl_dev_vsi_info_init(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct nbl_vsi_info *vsi_info;
+ struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+ int i;
+
+ vsi_info = devm_kcalloc(dev, NBL_RES_MGT_TO_PF_NUM(res_mgt),
+ sizeof(struct nbl_vsi_info), GFP_KERNEL);
+ if (!vsi_info)
+ return -ENOMEM;
+
+ res_mgt->resource_info->vsi_info = vsi_info;
+ /*
+ * case 1: two-port board (2 PFs)
+ * pf0,pf1 (NBL_VSI_SERV_PF_DATA_TYPE) vsi base is 0,512
+ *
+ * case 2: four-port board (4 PFs)
+ * pf0,pf1,pf2,pf3 (NBL_VSI_SERV_PF_DATA_TYPE) vsi base is 0,256,512,768
+ */
+
+ vsi_info->num = eth_info->eth_num;
+ for (i = 0; i < vsi_info->num; i++) {
+ vsi_info->serv_info[i][NBL_VSI_SERV_PF_DATA_TYPE].base_id =
+ i * NBL_VSI_ID_GAP(vsi_info->num);
+ vsi_info->serv_info[i][NBL_VSI_SERV_PF_DATA_TYPE].num = 1;
+ }
+ return 0;
+}
+
+static void nbl_res_ctrl_dev_remove_vsi_info(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_vsi_info **vsi_info = &NBL_RES_MGT_TO_VSI_INFO(res_mgt);
+
+ if (!(*vsi_info))
+ return;
+
+ devm_kfree(dev, *vsi_info);
+ *vsi_info = NULL;
+}
+
+static int nbl_res_ring_num_info_init(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_resource_info *resource_info =
+ NBL_RES_MGT_TO_RES_INFO(res_mgt);
+ struct nbl_net_ring_num_info *num_info =
+ &resource_info->net_ring_num_info;
+
+ num_info->pf_def_max_net_qp_num = NBL_DEFAULT_PF_HW_QUEUE_NUM;
+
+ return 0;
+}
+
+static int nbl_res_init_pf_num(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ u32 pf_mask;
+ u32 pf_num = 0;
+ int i;
+
+ pf_mask = hw_ops->get_host_pf_mask(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+ for (i = 0; i < NBL_MAX_PF_LEONIS; i++) {
+ if (!(pf_mask & (1 << i)))
+ pf_num++;
+ else
+ break;
+ }
+
+ res_mgt->resource_info->max_pf = pf_num;
+
+ if (!pf_num)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void nbl_res_init_board_info(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+ hw_ops->get_board_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ &res_mgt->resource_info->board_info);
+}
+
static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
{
+ struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
+
+ nbl_res_ctrl_dev_remove_vsi_info(res_mgt);
+ nbl_res_ctrl_dev_remove_eth_info(res_mgt);
+ nbl_res_ctrl_dev_sriov_info_remove(res_mgt);
}
static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis,
struct nbl_func_caps caps)
{
+ struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ int ret = 0;
+
+ if (caps.has_ctrl) {
+ nbl_res_init_board_info(res_mgt);
+
+ ret = nbl_res_init_pf_num(res_mgt);
+ if (ret) {
+ nbl_err(common, "pf number is illegal");
+ return ret;
+ }
+
+ ret = nbl_res_ctrl_dev_sriov_info_init(res_mgt);
+ if (ret) {
+ nbl_err(common, "Failed to init sriov info");
+ return ret;
+ }
+
+ ret = nbl_res_ctrl_dev_setup_eth_info(res_mgt);
+ if (ret)
+ goto start_fail;
+
+ ret = nbl_res_ctrl_dev_vsi_info_init(res_mgt);
+ if (ret)
+ goto start_fail;
+
+ ret = nbl_res_ring_num_info_init(res_mgt);
+ if (ret)
+ goto start_fail;
+ }
return 0;
+
+start_fail:
+ nbl_res_stop(res_mgt_leonis);
+ return ret;
}
int nbl_res_init_leonis(void *p, struct nbl_init_param *param)
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
index 80925ff9e680..a0a25a2b71ee 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
@@ -9,4 +9,5 @@
#include "nbl_resource.h"
+#define NBL_MAX_PF_LEONIS 8
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
new file mode 100644
index 000000000000..f25b51629014
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
@@ -0,0 +1,299 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ */
+
+#include "nbl_resource.h"
+
+static u16 pfvfid_to_vsi_id(void *p, int pfid, int vfid, u16 type)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt);
+ enum nbl_vsi_serv_type dst_type = NBL_VSI_SERV_PF_DATA_TYPE;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ u16 vsi_id = U16_MAX;
+ int diff;
+
+ diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid);
+ if (vfid == U32_MAX || vfid == U16_MAX) {
+ if (diff < vsi_info->num) {
+ nbl_res_pf_dev_vsi_type_to_hw_vsi_type(type, &dst_type);
+ vsi_id = vsi_info->serv_info[diff][dst_type].base_id;
+ }
+ }
+
+ if (vsi_id == U16_MAX)
+ pr_err("failed to convert pfid-vfid %d-%d (type %d) to vsi_id\n",
+ pfid, vfid, type);
+
+ return vsi_id;
+}
+
+static u16 func_id_to_vsi_id(void *p, u16 func_id, u16 type)
+{
+ int pfid = U32_MAX;
+ int vfid = U32_MAX;
+
+ nbl_res_func_id_to_pfvfid(p, func_id, &pfid, &vfid);
+ return nbl_res_pfvfid_to_vsi_id(p, pfid, vfid, type);
+}
+
+static u16 vsi_id_to_func_id(void *p, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt);
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ int i, j;
+ u16 func_id = U16_MAX;
+ bool vsi_find = false;
+
+ for (i = 0; i < vsi_info->num; i++) {
+ for (j = 0; j < NBL_VSI_SERV_MAX_TYPE; j++) {
+ if (vsi_id >= vsi_info->serv_info[i][j].base_id &&
+ (vsi_id < vsi_info->serv_info[i][j].base_id +
+ vsi_info->serv_info[i][j].num)) {
+ vsi_find = true;
+ break;
+ }
+ }
+
+ if (vsi_find)
+ break;
+ }
+
+ if (vsi_find) {
+ /* if pf_id < eth_num */
+ if (j == NBL_VSI_SERV_PF_DATA_TYPE)
+ func_id = i + NBL_COMMON_TO_MGT_PF(common);
+ }
+
+ if (func_id == U16_MAX)
+ pr_err("convert vsi_id %d to func_id failed!\n", vsi_id);
+
+ return func_id;
+}
+
+static int vsi_id_to_pf_id(void *p, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt);
+ u32 pf_id = U32_MAX;
+ bool vsi_find = false;
+ int i, j;
+
+ for (i = 0; i < vsi_info->num; i++) {
+ for (j = 0; j < NBL_VSI_SERV_MAX_TYPE; j++)
+ if (vsi_id >= vsi_info->serv_info[i][j].base_id &&
+ (vsi_id < vsi_info->serv_info[i][j].base_id +
+ vsi_info->serv_info[i][j].num)) {
+ vsi_find = true;
+ break;
+ }
+
+ if (vsi_find)
+ break;
+ }
+
+ if (vsi_find) {
+ /* if pf_id < eth_num */
+ if (j == NBL_VSI_SERV_PF_DATA_TYPE)
+ pf_id = i + NBL_COMMON_TO_MGT_PF(common);
+ }
+
+ return pf_id;
+}
+
+static int func_id_to_pfvfid(void *p, u16 func_id, int *pfid, int *vfid)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+
+ if (func_id < NBL_RES_MGT_TO_PF_NUM(res_mgt)) {
+ *pfid = func_id;
+ *vfid = U32_MAX;
+ return 0;
+ }
+ return U32_MAX;
+}
+
+static int func_id_to_bdf(void *p, u16 func_id, u8 *bus, u8 *dev, u8 *function)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_sriov_info *sriov_info;
+ int pfid = U32_MAX;
+ int vfid = U32_MAX;
+ int diff;
+ u8 pf_bus, devfn;
+
+ if (nbl_res_func_id_to_pfvfid(p, func_id, &pfid, &vfid))
+ return U32_MAX;
+
+ diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid);
+ sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + diff;
+ pf_bus = PCI_BUS_NUM(sriov_info->bdf);
+ devfn = sriov_info->bdf & 0xff;
+ *bus = pf_bus;
+ *dev = PCI_SLOT(devfn);
+ *function = PCI_FUNC(devfn);
+ return 0;
+}
+
+static u16 pfvfid_to_func_id(void *p, int pfid, int vfid)
+{
+ return pfid;
+}
+
+static u64 get_func_bar_base_addr(void *p, u16 func_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_sriov_info *sriov_info;
+ u64 base_addr = 0;
+ int pfid = U32_MAX;
+ int vfid = U32_MAX;
+ int diff;
+
+ if (nbl_res_func_id_to_pfvfid(p, func_id, &pfid, &vfid))
+ return 0;
+
+ diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid);
+ sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + diff;
+ if (!sriov_info->pf_bar_start) {
+ nbl_err(common,
+ "Try to get bar addr for func %d, but PF_%d sriov not init",
+ func_id, pfid);
+ return 0;
+ }
+ base_addr = sriov_info->pf_bar_start;
+ nbl_debug(common, "pfid %d vfid %d base_addr %llx\n", pfid, vfid,
+ base_addr);
+ return base_addr;
+}
+
+static u8 vsi_id_to_eth_id(void *p, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+
+ if (!eth_info)
+ return 0;
+
+ return eth_info->eth_id[nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id)];
+}
+
+static u8 eth_id_to_pf_id(void *p, u8 eth_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+ int i;
+ u8 pf_id_offset = 0;
+
+ for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET) {
+ if (i == eth_id)
+ break;
+ pf_id_offset++;
+ }
+
+ return pf_id_offset + NBL_COMMON_TO_MGT_PF(common);
+}
+
+int nbl_res_func_id_to_pfvfid(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ int *pfid, int *vfid)
+{
+ if (!res_mgt->common_ops.func_id_to_pfvfid)
+ return func_id_to_pfvfid(res_mgt, func_id, pfid, vfid);
+
+ return res_mgt->common_ops.func_id_to_pfvfid(res_mgt, func_id, pfid,
+ vfid);
+}
+
+u16 nbl_res_pfvfid_to_func_id(struct nbl_resource_mgt *res_mgt, int pfid,
+ int vfid)
+{
+ if (!res_mgt->common_ops.pfvfid_to_func_id)
+ return pfvfid_to_func_id(res_mgt, pfid, vfid);
+
+ return res_mgt->common_ops.pfvfid_to_func_id(res_mgt, pfid, vfid);
+}
+
+u16 nbl_res_pfvfid_to_vsi_id(struct nbl_resource_mgt *res_mgt, int pfid,
+ int vfid, u16 type)
+{
+ if (!res_mgt->common_ops.pfvfid_to_vsi_id)
+ return pfvfid_to_vsi_id(res_mgt, pfid, vfid, type);
+
+ return res_mgt->common_ops.pfvfid_to_vsi_id(res_mgt, pfid, vfid, type);
+}
+
+int nbl_res_func_id_to_bdf(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ u8 *bus, u8 *dev, u8 *function)
+{
+ if (!res_mgt->common_ops.func_id_to_bdf)
+ return func_id_to_bdf(res_mgt, func_id, bus, dev, function);
+
+ return res_mgt->common_ops.func_id_to_bdf(res_mgt, func_id, bus, dev,
+ function);
+}
+
+u16 nbl_res_vsi_id_to_func_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id)
+{
+ if (!res_mgt->common_ops.vsi_id_to_func_id)
+ return vsi_id_to_func_id(res_mgt, vsi_id);
+
+ return res_mgt->common_ops.vsi_id_to_func_id(res_mgt, vsi_id);
+}
+
+int nbl_res_vsi_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id)
+{
+ if (!res_mgt->common_ops.vsi_id_to_pf_id)
+ return vsi_id_to_pf_id(res_mgt, vsi_id);
+
+ return res_mgt->common_ops.vsi_id_to_pf_id(res_mgt, vsi_id);
+}
+
+u16 nbl_res_func_id_to_vsi_id(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ u16 type)
+{
+ if (!res_mgt->common_ops.func_id_to_vsi_id)
+ return func_id_to_vsi_id(res_mgt, func_id, type);
+
+ return res_mgt->common_ops.func_id_to_vsi_id(res_mgt, func_id, type);
+}
+
+u64 nbl_res_get_func_bar_base_addr(struct nbl_resource_mgt *res_mgt,
+ u16 func_id)
+{
+ if (!res_mgt->common_ops.get_func_bar_base_addr)
+ return get_func_bar_base_addr(res_mgt, func_id);
+
+ return res_mgt->common_ops.get_func_bar_base_addr(res_mgt, func_id);
+}
+
+u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id)
+{
+ if (!res_mgt->common_ops.vsi_id_to_eth_id)
+ return vsi_id_to_eth_id(res_mgt, vsi_id);
+
+ return res_mgt->common_ops.vsi_id_to_eth_id(res_mgt, vsi_id);
+}
+
+u8 nbl_res_eth_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u8 eth_id)
+{
+ if (!res_mgt->common_ops.eth_id_to_pf_id)
+ return eth_id_to_pf_id(res_mgt, eth_id);
+
+ return res_mgt->common_ops.eth_id_to_pf_id(res_mgt, eth_id);
+}
+
+void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type,
+ enum nbl_vsi_serv_type *dst_type)
+{
+ if (src_type == NBL_VSI_DATA)
+ *dst_type = NBL_VSI_SERV_PF_DATA_TYPE;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
index a43f4ca79b2b..f6fd151141d5 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
@@ -8,10 +8,23 @@
#define _NBL_RESOURCE_H_
#include "nbl_core.h"
+#include "nbl_hw.h"
#define NBL_RES_MGT_TO_COMMON(res_mgt) ((res_mgt)->common)
#define NBL_RES_MGT_TO_COMMON_OPS(res_mgt) (&((res_mgt)->common_ops))
+#define NBL_RES_MGT_TO_DEV(res_mgt) \
+ NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt))
+#define NBL_RES_MGT_TO_DMA_DEV(res_mgt) \
+ NBL_COMMON_TO_DMA_DEV(NBL_RES_MGT_TO_COMMON(res_mgt))
#define NBL_RES_MGT_TO_RES_INFO(res_mgt) ((res_mgt)->resource_info)
+#define NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) \
+ (NBL_RES_MGT_TO_RES_INFO(res_mgt)->sriov_info)
+#define NBL_RES_MGT_TO_ETH_INFO(res_mgt) \
+ (NBL_RES_MGT_TO_RES_INFO(res_mgt)->eth_info)
+#define NBL_RES_MGT_TO_VSI_INFO(res_mgt) \
+ (NBL_RES_MGT_TO_RES_INFO(res_mgt)->vsi_info)
+#define NBL_RES_MGT_TO_PF_NUM(res_mgt) \
+ (NBL_RES_MGT_TO_RES_INFO(res_mgt)->max_pf)
#define NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt) ((res_mgt)->hw_ops_tbl)
#define NBL_RES_MGT_TO_HW_OPS(res_mgt) (NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt)->ops)
@@ -23,10 +36,79 @@
#define NBL_RES_MGT_TO_CHAN_PRIV(res_mgt) \
(NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->priv)
+#define NBL_MAX_NET_ID NBL_MAX_FUNC
+#define NBL_MAX_JUMBO_FRAME_SIZE (9600)
+#define NBL_PKT_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
+#define NBL_DEFAULT_DESC_NUM (1024)
+#define NBL_MAX_TXRX_QUEUE_PER_FUNC (256)
+#define NBL_DEFAULT_PF_HW_QUEUE_NUM (16)
+
+/* --------- INFO ---------- */
+struct nbl_sriov_info {
+ unsigned int bdf;
+ u64 pf_bar_start;
+};
+
+struct nbl_eth_info {
+ DECLARE_BITMAP(eth_bitmap, NBL_MAX_ETHERNET);
+ u8 pf_bitmap[NBL_MAX_ETHERNET];
+ u8 eth_num;
+ u8 resv[3];
+ u8 eth_id[NBL_MAX_PF];
+ u8 logic_eth_id[NBL_MAX_PF];
+};
+
+enum nbl_vsi_serv_type {
+ NBL_VSI_SERV_PF_DATA_TYPE,
+ NBL_VSI_SERV_MAX_TYPE,
+};
+
+struct nbl_vsi_serv_info {
+ u16 base_id;
+ u16 num;
+};
+
+struct nbl_vsi_info {
+ u16 num;
+ struct nbl_vsi_serv_info serv_info[NBL_MAX_ETHERNET]
+ [NBL_VSI_SERV_MAX_TYPE];
+};
+
+struct nbl_net_ring_num_info {
+ u16 pf_def_max_net_qp_num;
+ u16 net_max_qp_num[NBL_MAX_FUNC];
+};
+
struct nbl_resource_info {
+ /* ctrl-dev owned pfs */
+ DECLARE_BITMAP(func_bitmap, NBL_MAX_FUNC);
+ struct nbl_sriov_info *sriov_info;
+ struct nbl_eth_info *eth_info;
+ struct nbl_vsi_info *vsi_info;
+ u32 base_qid;
+ u32 max_vf_num;
+ struct nbl_net_ring_num_info net_ring_num_info;
+ u8 max_pf;
+ struct nbl_board_port_info board_info;
};
struct nbl_resource_common_ops {
+ u16 (*vsi_id_to_func_id)(void *res_mgt, u16 vsi_id);
+ int (*vsi_id_to_pf_id)(void *res_mgt, u16 vsi_id);
+ u16 (*vsi_id_to_vf_id)(void *res_mgt, u16 vsi_id);
+ u16 (*pfvfid_to_func_id)(void *res_mgt, int pfid, int vfid);
+ u16 (*pfvfid_to_vsi_id)(void *res_mgt, int pfid, int vfid, u16 type);
+ u16 (*func_id_to_vsi_id)(void *res_mgt, u16 func_id, u16 type);
+ int (*func_id_to_pfvfid)(void *res_mgt, u16 func_id, int *pfid,
+ int *vfid);
+ int (*func_id_to_bdf)(void *res_mgt, u16 func_id, u8 *bus, u8 *dev,
+ u8 *function);
+ u64 (*get_func_bar_base_addr)(void *res_mgt, u16 func_id);
+ u8 (*vsi_id_to_eth_id)(void *res_mgt, u16 vsi_id);
+ u8 (*eth_id_to_pf_id)(void *res_mgt, u8 eth_id);
+ int (*get_queue_num)(void *res_mgt, u16 func_id, u16 *tx_queue_num,
+ u16 *rx_queue_num);
};
struct nbl_res_product_ops {
@@ -52,4 +134,23 @@ struct nbl_resource_mgt {
struct nbl_resource_mgt_leonis {
struct nbl_resource_mgt res_mgt;
};
+
+u16 nbl_res_vsi_id_to_func_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id);
+int nbl_res_vsi_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id);
+u16 nbl_res_pfvfid_to_func_id(struct nbl_resource_mgt *res_mgt, int pfid,
+ int vfid);
+u16 nbl_res_pfvfid_to_vsi_id(struct nbl_resource_mgt *res_mgt, int pfid,
+ int vfid, u16 type);
+u16 nbl_res_func_id_to_vsi_id(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ u16 type);
+int nbl_res_func_id_to_pfvfid(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ int *pfid, int *vfid);
+u8 nbl_res_eth_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u8 eth_id);
+int nbl_res_func_id_to_bdf(struct nbl_resource_mgt *res_mgt, u16 func_id,
+ u8 *bus, u8 *dev, u8 *function);
+u64 nbl_res_get_func_bar_base_addr(struct nbl_resource_mgt *res_mgt,
+ u16 func_id);
+u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id);
+void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type,
+ enum nbl_vsi_serv_type *dst_type);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
index f62623ac8b76..fe176ee72128 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
@@ -48,6 +48,23 @@ do { \
#define NBL_COMMON_TO_PCI_FUNC_ID(common) ((common)->function)
#define NBL_COMMON_TO_LOGIC_ETH_ID(common) ((common)->logic_eth_id)
+#define NBL_ONE_ETHERNET_PORT (1)
+#define NBL_TWO_ETHERNET_PORT (2)
+#define NBL_FOUR_ETHERNET_PORT (4)
+#define NBL_DEFAULT_VSI_ID_GAP (1024)
+#define NBL_TWO_ETHERNET_VSI_ID_GAP (512)
+#define NBL_FOUR_ETHERNET_VSI_ID_GAP (256)
+
+#define NBL_VSI_ID_GAP(m) \
+ ({ \
+ typeof(m) _m = (m); \
+ _m == NBL_FOUR_ETHERNET_PORT ? \
+ NBL_FOUR_ETHERNET_VSI_ID_GAP : \
+ (_m == NBL_TWO_ETHERNET_PORT ? \
+ NBL_TWO_ETHERNET_VSI_ID_GAP : \
+ NBL_DEFAULT_VSI_ID_GAP); \
+ })
+
struct nbl_common_info {
struct pci_dev *pdev;
struct device *dev;
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
index 55cacc94888c..ef8ccb5707f5 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
@@ -16,6 +16,13 @@ struct nbl_resource_pt_ops {
};
struct nbl_resource_ops {
+ int (*register_net)(void *priv, u16 func_id,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result);
+ int (*unregister_net)(void *priv, u16 func_id);
+ u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
+ void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
+ u8 *logic_eth_id);
};
struct nbl_resource_ops_tbl {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 375f7d530012..53fbcf43149b 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -18,11 +18,17 @@
typeof(id) _id = (id); \
((_id) == (max) ? 0 : (_id) + 1); \
})
+#define NBL_MAX_FUNC (520)
enum nbl_product_type {
NBL_LEONIS_TYPE,
NBL_PRODUCT_MAX,
};
+enum {
+ NBL_VSI_DATA = 0,
+ NBL_VSI_MAX,
+};
+
struct nbl_func_caps {
u32 has_ctrl:1;
u32 has_net:1;
@@ -63,4 +69,42 @@ struct nbl_vnet_queue_info_param {
u32 valid;
};
+enum {
+ NBL_NETIF_F_SG_BIT, /* Scatter/gather IO. */
+ NBL_NETIF_F_IP_CSUM_BIT, /* csum TCP/UDP over IPv4 */
+ NBL_NETIF_F_HW_CSUM_BIT, /* csum all the packets. */
+ NBL_NETIF_F_IPV6_CSUM_BIT, /* csum TCP/UDP over IPV6 */
+ NBL_NETIF_F_HIGHDMA_BIT, /* DMA to high memory. */
+ NBL_NETIF_F_HW_VLAN_CTAG_TX_BIT, /* Tx VLAN CTAG HW accel */
+ NBL_NETIF_F_HW_VLAN_CTAG_RX_BIT, /* Rx VLAN CTAG HW accel */
+ NBL_NETIF_F_HW_VLAN_CTAG_FILTER_BIT, /* Rx filtering on VLAN CTAG */
+ NBL_NETIF_F_TSO_BIT, /* TCPv4 segmentation */
+ NBL_NETIF_F_GSO_ROBUST_BIT, /* SKB_GSO_DODGY */
+ NBL_NETIF_F_TSO6_BIT, /* TCPv6 segmentation */
+ NBL_NETIF_F_GSO_GRE_BIT, /* GRE with TSO */
+ NBL_NETIF_F_GSO_GRE_CSUM_BIT, /* GRE with csum with TSO */
+ NBL_NETIF_F_GSO_UDP_TUNNEL_BIT, /* UDP TUNNEL with TSO */
+ NBL_NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT, /* UDP TUNNEL with TSO & CSUM */
+ NBL_NETIF_F_GSO_PARTIAL_BIT, /* Only segment inner-most L4
+ * in hardware and all other
+ * headers in software.
+ */
+ NBL_NETIF_F_GSO_UDP_L4_BIT, /* UDP payload GSO (not UFO) */
+ NBL_NETIF_F_SCTP_CRC_BIT, /* SCTP checksum offload */
+ NBL_NETIF_F_NTUPLE_BIT, /* N-tuple filters supported */
+ NBL_NETIF_F_RXHASH_BIT, /* Rx hashing offload */
+ NBL_NETIF_F_RXCSUM_BIT, /* Rx checksumming offload */
+ NBL_NETIF_F_HW_VLAN_STAG_TX_BIT, /* Tx VLAN STAG HW accel */
+ NBL_NETIF_F_HW_VLAN_STAG_RX_BIT, /* Rx VLAN STAG HW accel */
+ NBL_NETIF_F_HW_VLAN_STAG_FILTER_BIT, /* Rx filtering on VLAN STAG */
+ NBL_NETIF_F_HW_TC_BIT, /* Offload TC infrastructure */
+ NBL_FEATURES_COUNT
+};
+
+#define NBL_FEATURE(name) (1 << (NBL_##name##_BIT))
+#define NBL_FEATURE_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
+enum nbl_performance_mode {
+ NBL_QUIRKS_NO_TOE,
+ NBL_QUIRKS_UVN_PREFETCH_ALIGN,
+};
#endif
--
2.47.3
* [PATCH v3 net-next 06/15] net/nebula-matrix: add intr resource implementation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (4 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 05/15] net/nebula-matrix: add resource layer common part implementation illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 07/15] net/nebula-matrix: add queue " illusion.wang
` (9 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
Dynamically allocate and manage MSI-X interrupt vectors, covering both
network interrupts and other interrupt types.

Interrupt mapping table management: maintain the MSI-X mapping table
(msix_map_table) that associates hardware interrupts with software
vectors.

Interrupt enabling/disabling: support enabling or disabling specific
interrupts through hardware operations.

Interrupt information query: provide interfaces to obtain the hardware
register address and data needed to enable an interrupt.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 1 +
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 4 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 95 +++++-
.../nbl_hw_leonis/nbl_resource_leonis.c | 19 ++
.../nebula-matrix/nbl/nbl_hw/nbl_interrupt.c | 314 ++++++++++++++++++
.../nebula-matrix/nbl/nbl_hw/nbl_interrupt.h | 13 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 271 +++++++++++++++
.../nbl/nbl_include/nbl_def_resource.h | 9 +
.../nbl/nbl_include/nbl_include.h | 139 +++++++-
9 files changed, 856 insertions(+), 9 deletions(-)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index 8bf3ca9c6862..d27f0bd7f81b 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -10,6 +10,7 @@ nbl_core-objs += nbl_common/nbl_common.o \
nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
nbl_hw/nbl_resource.o \
+ nbl_hw/nbl_interrupt.o \
nbl_core/nbl_dispatch.o \
nbl_core/nbl_service.o \
nbl_core/nbl_dev.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index 9db7c9b4a739..6d4b890a4d84 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -34,6 +34,10 @@
#define NBL_ADAP_TO_SERV_OPS_TBL(adapter) ((adapter)->intf.service_ops_tbl)
#define NBL_ADAP_TO_DEV_OPS_TBL(adapter) ((adapter)->intf.dev_ops_tbl)
#define NBL_ADAP_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl)
+
+#define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \
+ (&(NBL_ADAP_TO_SERV_OPS_TBL(adapter)->pt_ops))
+
#define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
#define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index 63d01f790af8..baad9e6ad4c9 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -19,6 +19,94 @@ static u32 nbl_hw_get_quirks(void *priv)
return quirks;
}
+static void nbl_hw_enable_mailbox_irq(void *priv, u16 func_id, bool enable_msix,
+ u16 global_vec_id)
+{
+ struct nbl_mailbox_qinfo_map_table mb_qinfo_map = { 0 };
+
+ nbl_hw_rd_regs(priv, NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id),
+ (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map));
+
+ if (enable_msix) {
+ mb_qinfo_map.msix_idx = global_vec_id;
+ mb_qinfo_map.msix_idx_valid = 1;
+ } else {
+ mb_qinfo_map.msix_idx = 0;
+ mb_qinfo_map.msix_idx_valid = 0;
+ }
+
+ nbl_hw_wr_regs(priv, NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id),
+ (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map));
+}
+
+static void nbl_hw_enable_msix_irq(void *priv, u16 global_vec_id)
+{
+ struct nbl_msix_notify msix_notify = { 0 };
+
+ msix_notify.glb_msix_idx = global_vec_id;
+
+ nbl_hw_wr_regs(priv, NBL_PCOMPLETER_MSIX_NOTIRY_OFFSET,
+ (u8 *)&msix_notify, sizeof(msix_notify));
+}
+
+static u8 __iomem *
+nbl_hw_get_msix_irq_enable_info(void *priv, u16 global_vec_id, u32 *irq_data)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_msix_notify msix_notify = { 0 };
+
+ msix_notify.glb_msix_idx = global_vec_id;
+ memcpy(irq_data, &msix_notify, sizeof(msix_notify));
+
+ return (hw_mgt->hw_addr + NBL_PCOMPLETER_MSIX_NOTIRY_OFFSET);
+}
+
+static void nbl_hw_configure_msix_map(void *priv, u16 func_id, bool valid,
+ dma_addr_t dma_addr, u8 bus, u8 devid,
+ u8 function)
+{
+ struct nbl_function_msix_map function_msix_map = { 0 };
+
+ if (valid) {
+ function_msix_map.msix_map_base_addr = dma_addr;
+		/* use the AF's BDF, because the DMA memory is allocated by the AF */
+ function_msix_map.function = function;
+ function_msix_map.devid = devid;
+ function_msix_map.bus = bus;
+ function_msix_map.valid = 1;
+ }
+
+ nbl_hw_wr_regs(priv, NBL_PCOMPLETER_FUNCTION_MSIX_MAP_REG_ARR(func_id),
+ (u8 *)&function_msix_map, sizeof(function_msix_map));
+}
+
+static void nbl_hw_configure_msix_info(void *priv, u16 func_id, bool valid,
+ u16 interrupt_id, u8 bus, u8 devid,
+ u8 function, bool msix_mask_en)
+{
+ struct nbl_pcompleter_host_msix_fid_table host_msix_fid_table = { 0 };
+ struct nbl_host_msix_info msix_info = { 0 };
+
+ if (valid) {
+ host_msix_fid_table.vld = 1;
+ host_msix_fid_table.fid = func_id;
+
+ msix_info.intrl_pnum = 0;
+ msix_info.intrl_rate = 0;
+ msix_info.function = function;
+ msix_info.devid = devid;
+ msix_info.bus = bus;
+ msix_info.valid = 1;
+ if (msix_mask_en)
+ msix_info.msix_mask_en = 1;
+ }
+
+ nbl_hw_wr_regs(priv, NBL_PADPT_HOST_MSIX_INFO_REG_ARR(interrupt_id),
+ (u8 *)&msix_info, sizeof(msix_info));
+ nbl_hw_wr_regs(priv, NBL_PCOMPLETER_HOST_MSIX_FID_TABLE(interrupt_id),
+ (u8 *)&host_msix_fid_table, sizeof(host_msix_fid_table));
+}
+
static void nbl_hw_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr,
u8 txrx)
{
@@ -203,6 +291,9 @@ static u32 nbl_hw_get_fw_eth_map(void *priv)
}
static struct nbl_hw_ops hw_ops = {
+ .configure_msix_map = nbl_hw_configure_msix_map,
+ .configure_msix_info = nbl_hw_configure_msix_info,
+
.update_mailbox_queue_tail_ptr = nbl_hw_update_mailbox_queue_tail_ptr,
.config_mailbox_rxq = nbl_hw_config_mailbox_rxq,
.config_mailbox_txq = nbl_hw_config_mailbox_txq,
@@ -215,7 +306,9 @@ static struct nbl_hw_ops hw_ops = {
.get_pf_bar_addr = nbl_hw_get_pf_bar_addr,
.cfg_mailbox_qinfo = nbl_hw_cfg_mailbox_qinfo,
-
+ .enable_mailbox_irq = nbl_hw_enable_mailbox_irq,
+ .enable_msix_irq = nbl_hw_enable_msix_irq,
+ .get_msix_irq_enable_info = nbl_hw_get_msix_irq_enable_info,
.get_fw_eth_num = nbl_hw_get_fw_eth_num,
.get_fw_eth_map = nbl_hw_get_fw_eth_map,
.get_board_info = nbl_hw_get_board_info,
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
index 308d675478b6..16e5941aaf76 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
@@ -173,6 +173,7 @@ static struct nbl_resource_ops res_ops = {
static struct nbl_res_product_ops product_ops = {
};
+static bool is_ops_inited;
static int
nbl_res_setup_res_mgt(struct nbl_common_info *common,
struct nbl_resource_mgt_leonis **res_mgt_leonis)
@@ -220,15 +221,28 @@ static int nbl_res_setup_ops(struct device *dev,
struct nbl_resource_ops_tbl **res_ops_tbl,
struct nbl_resource_mgt_leonis *res_mgt_leonis)
{
+ int ret = 0;
+
*res_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_resource_ops_tbl),
GFP_KERNEL);
if (!*res_ops_tbl)
return -ENOMEM;
+ if (!is_ops_inited) {
+ ret = nbl_intr_setup_ops(&res_ops);
+ if (ret)
+ goto setup_fail;
+ is_ops_inited = true;
+ }
+
(*res_ops_tbl)->ops = &res_ops;
(*res_ops_tbl)->priv = res_mgt_leonis;
return 0;
+
+setup_fail:
+ nbl_res_remove_ops(dev, res_ops_tbl);
+	return ret;
}
static int nbl_res_ctrl_dev_setup_eth_info(struct nbl_resource_mgt *res_mgt)
@@ -416,6 +430,7 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
{
struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
+ nbl_intr_mgt_stop(res_mgt);
nbl_res_ctrl_dev_remove_vsi_info(res_mgt);
nbl_res_ctrl_dev_remove_eth_info(res_mgt);
nbl_res_ctrl_dev_sriov_info_remove(res_mgt);
@@ -454,6 +469,10 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis,
ret = nbl_res_ring_num_info_init(res_mgt);
if (ret)
goto start_fail;
+
+ ret = nbl_intr_mgt_start(res_mgt);
+ if (ret)
+ goto start_fail;
}
return 0;
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c
new file mode 100644
index 000000000000..97cabddcc02d
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c
@@ -0,0 +1,314 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include "nbl_interrupt.h"
+
+static int nbl_res_intr_destroy_msix_map(void *priv, u16 func_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct device *dma_dev;
+ struct nbl_hw_ops *hw_ops;
+ struct nbl_interrupt_mgt *intr_mgt;
+ struct nbl_msix_map_table *msix_map_table;
+ u16 *interrupts;
+ u16 intr_num;
+ u16 i;
+
+ if (!res_mgt)
+ return -EINVAL;
+
+ hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt);
+ dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+
+ /* use ctrl dev bdf */
+ hw_ops->configure_msix_map(NBL_RES_MGT_TO_HW_PRIV(res_mgt), func_id,
+ false, 0, 0, 0, 0);
+
+ intr_num = intr_mgt->func_intr_res[func_id].num_interrupts;
+ interrupts = intr_mgt->func_intr_res[func_id].interrupts;
+
+ WARN_ON(!interrupts);
+ for (i = 0; i < intr_num; i++) {
+ if (interrupts[i] >= NBL_MAX_OTHER_INTERRUPT)
+ clear_bit(interrupts[i] - NBL_MAX_OTHER_INTERRUPT,
+ intr_mgt->interrupt_net_bitmap);
+ else
+ clear_bit(interrupts[i],
+ intr_mgt->interrupt_others_bitmap);
+
+ hw_ops->configure_msix_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ func_id, false, interrupts[i], 0, 0,
+ 0, false);
+ }
+
+ kfree(interrupts);
+ intr_mgt->func_intr_res[func_id].interrupts = NULL;
+ intr_mgt->func_intr_res[func_id].num_interrupts = 0;
+
+ msix_map_table = &intr_mgt->func_intr_res[func_id].msix_map_table;
+ dma_free_coherent(dma_dev, msix_map_table->size,
+ msix_map_table->base_addr, msix_map_table->dma);
+ msix_map_table->size = 0;
+ msix_map_table->base_addr = NULL;
+ msix_map_table->dma = 0;
+
+ return 0;
+}
+
+static int nbl_res_intr_configure_msix_map(void *priv, u16 func_id,
+ u16 num_net_msix,
+ u16 num_others_msix,
+ bool net_msix_mask_en)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct device *dma_dev;
+ struct nbl_hw_ops *hw_ops;
+ struct nbl_interrupt_mgt *intr_mgt;
+ struct nbl_common_info *common;
+ struct nbl_msix_map_table *msix_map_table;
+ struct nbl_msix_map *msix_map_entries;
+ u16 *interrupts;
+ u16 requested;
+ u16 intr_index;
+ u16 i;
+ u8 bus, devid, function;
+ bool msix_mask_en;
+ int ret = 0;
+
+ if (!res_mgt)
+ return -EINVAL;
+
+ hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt);
+ dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+ common = NBL_RES_MGT_TO_COMMON(res_mgt);
+
+ if (intr_mgt->func_intr_res[func_id].interrupts)
+ nbl_res_intr_destroy_msix_map(priv, func_id);
+
+ nbl_res_func_id_to_bdf(res_mgt, func_id, &bus, &devid, &function);
+
+ msix_map_table = &intr_mgt->func_intr_res[func_id].msix_map_table;
+ WARN_ON(msix_map_table->base_addr);
+ msix_map_table->size =
+ sizeof(struct nbl_msix_map) * NBL_MSIX_MAP_TABLE_MAX_ENTRIES;
+ msix_map_table->base_addr = dma_alloc_coherent(dma_dev,
+ msix_map_table->size,
+ &msix_map_table->dma,
+ GFP_ATOMIC | __GFP_ZERO);
+ if (!msix_map_table->base_addr) {
+ pr_err("Allocate DMA memory for function msix map table failed\n");
+ msix_map_table->size = 0;
+ return -ENOMEM;
+ }
+
+ requested = num_net_msix + num_others_msix;
+ interrupts = kcalloc(requested, sizeof(interrupts[0]), GFP_ATOMIC);
+ if (!interrupts) {
+ pr_err("Allocate function interrupts array failed\n");
+ ret = -ENOMEM;
+ goto alloc_interrupts_err;
+ }
+
+ intr_mgt->func_intr_res[func_id].interrupts = interrupts;
+ intr_mgt->func_intr_res[func_id].num_interrupts = requested;
+ intr_mgt->func_intr_res[func_id].num_net_interrupts = num_net_msix;
+
+ for (i = 0; i < num_net_msix; i++) {
+ intr_index = find_first_zero_bit(intr_mgt->interrupt_net_bitmap,
+ NBL_MAX_NET_INTERRUPT);
+ if (intr_index == NBL_MAX_NET_INTERRUPT) {
+ pr_err("There is no available interrupt left\n");
+ ret = -EAGAIN;
+ goto get_interrupt_err;
+ }
+ interrupts[i] = intr_index + NBL_MAX_OTHER_INTERRUPT;
+ set_bit(intr_index, intr_mgt->interrupt_net_bitmap);
+ }
+
+ for (i = num_net_msix; i < requested; i++) {
+ intr_index =
+ find_first_zero_bit(intr_mgt->interrupt_others_bitmap,
+ NBL_MAX_OTHER_INTERRUPT);
+ if (intr_index == NBL_MAX_OTHER_INTERRUPT) {
+ pr_err("There is no available interrupt left\n");
+ ret = -EAGAIN;
+ goto get_interrupt_err;
+ }
+ interrupts[i] = intr_index;
+ set_bit(intr_index, intr_mgt->interrupt_others_bitmap);
+ }
+
+ msix_map_entries = msix_map_table->base_addr;
+ for (i = 0; i < requested; i++) {
+ msix_map_entries[i].global_msix_index = interrupts[i];
+ msix_map_entries[i].valid = 1;
+
+ if (i < num_net_msix && net_msix_mask_en)
+ msix_mask_en = 1;
+ else
+ msix_mask_en = 0;
+ hw_ops->configure_msix_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ func_id, true, interrupts[i], bus,
+ devid, function, msix_mask_en);
+ }
+
+ /* use ctrl dev bdf */
+ hw_ops->configure_msix_map(NBL_RES_MGT_TO_HW_PRIV(res_mgt), func_id,
+ true, msix_map_table->dma, common->hw_bus,
+ common->devid,
+ NBL_COMMON_TO_PCI_FUNC_ID(common));
+
+ return 0;
+
+get_interrupt_err:
+ while (i--) {
+ intr_index = interrupts[i];
+ if (intr_index >= NBL_MAX_OTHER_INTERRUPT)
+ clear_bit(intr_index - NBL_MAX_OTHER_INTERRUPT,
+ intr_mgt->interrupt_net_bitmap);
+ else
+ clear_bit(intr_index,
+ intr_mgt->interrupt_others_bitmap);
+ }
+ kfree(interrupts);
+ intr_mgt->func_intr_res[func_id].num_interrupts = 0;
+ intr_mgt->func_intr_res[func_id].interrupts = NULL;
+
+alloc_interrupts_err:
+ dma_free_coherent(dma_dev, msix_map_table->size,
+ msix_map_table->base_addr, msix_map_table->dma);
+ msix_map_table->size = 0;
+ msix_map_table->base_addr = NULL;
+ msix_map_table->dma = 0;
+
+ return ret;
+}
+
+static int nbl_res_intr_enable_mailbox_irq(void *priv, u16 func_id,
+ u16 vector_id, bool enable_msix)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops;
+ struct nbl_interrupt_mgt *intr_mgt;
+ u16 global_vec_id;
+
+ hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt);
+
+ global_vec_id = intr_mgt->func_intr_res[func_id].interrupts[vector_id];
+ hw_ops->enable_mailbox_irq(NBL_RES_MGT_TO_HW_PRIV(res_mgt), func_id,
+ enable_msix, global_vec_id);
+
+ return 0;
+}
+
+static u8 __iomem *nbl_res_get_msix_irq_enable_info(void *priv,
+ u16 global_vec_id,
+ u32 *irq_data)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops;
+
+ hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+ return hw_ops->get_msix_irq_enable_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_vec_id, irq_data);
+}
+
+static u16 nbl_res_intr_get_global_vector(void *priv, u16 vsi_id,
+ u16 local_vec_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_interrupt_mgt *intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt);
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+
+ return intr_mgt->func_intr_res[func_id].interrupts[local_vec_id];
+}
+
+static u16 nbl_res_intr_get_msix_entry_id(void *priv, u16 vsi_id,
+ u16 local_vec_id)
+{
+ return local_vec_id;
+}
+
+/* NBL_INTR_SET_OPS(ops_name, func)
+ *
+ * Use X Macros to reduce setup and remove codes.
+ */
+#define NBL_INTR_OPS_TBL \
+do { \
+ NBL_INTR_SET_OPS(configure_msix_map, \
+ nbl_res_intr_configure_msix_map); \
+ NBL_INTR_SET_OPS(destroy_msix_map, \
+ nbl_res_intr_destroy_msix_map); \
+ NBL_INTR_SET_OPS(enable_mailbox_irq, \
+ nbl_res_intr_enable_mailbox_irq); \
+ NBL_INTR_SET_OPS(get_msix_irq_enable_info, \
+ nbl_res_get_msix_irq_enable_info); \
+ NBL_INTR_SET_OPS(get_global_vector, \
+ nbl_res_intr_get_global_vector); \
+ NBL_INTR_SET_OPS(get_msix_entry_id, \
+ nbl_res_intr_get_msix_entry_id); \
+} while (0)
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_intr_setup_mgt(struct device *dev,
+ struct nbl_interrupt_mgt **intr_mgt)
+{
+ *intr_mgt =
+ devm_kzalloc(dev, sizeof(struct nbl_interrupt_mgt), GFP_KERNEL);
+ if (!*intr_mgt)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nbl_intr_remove_mgt(struct device *dev,
+ struct nbl_interrupt_mgt **intr_mgt)
+{
+ devm_kfree(dev, *intr_mgt);
+ *intr_mgt = NULL;
+}
+
+int nbl_intr_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_interrupt_mgt **intr_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ intr_mgt = &NBL_RES_MGT_TO_INTR_MGT(res_mgt);
+
+ return nbl_intr_setup_mgt(dev, intr_mgt);
+}
+
+void nbl_intr_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_interrupt_mgt **intr_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ intr_mgt = &NBL_RES_MGT_TO_INTR_MGT(res_mgt);
+
+ if (!(*intr_mgt))
+ return;
+
+ nbl_intr_remove_mgt(dev, intr_mgt);
+}
+
+int nbl_intr_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_INTR_SET_OPS(name, func) \
+	do { \
+		res_ops->NBL_NAME(name) = func; \
+	} while (0)
+ NBL_INTR_OPS_TBL;
+#undef NBL_INTR_SET_OPS
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h
new file mode 100644
index 000000000000..5448bcf36416
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_INTERRUPT_H_
+#define _NBL_INTERRUPT_H_
+
+#include "nbl_resource.h"
+
+#define NBL_MSIX_MAP_TABLE_MAX_ENTRIES (1024)
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
index f6fd151141d5..b8653e0039d7 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
@@ -10,12 +10,20 @@
#include "nbl_core.h"
#include "nbl_hw.h"
+struct nbl_resource_mgt;
+
#define NBL_RES_MGT_TO_COMMON(res_mgt) ((res_mgt)->common)
#define NBL_RES_MGT_TO_COMMON_OPS(res_mgt) (&((res_mgt)->common_ops))
#define NBL_RES_MGT_TO_DEV(res_mgt) \
NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt))
#define NBL_RES_MGT_TO_DMA_DEV(res_mgt) \
NBL_COMMON_TO_DMA_DEV(NBL_RES_MGT_TO_COMMON(res_mgt))
+#define NBL_RES_MGT_TO_INTR_MGT(res_mgt) ((res_mgt)->intr_mgt)
+#define NBL_RES_MGT_TO_QUEUE_MGT(res_mgt) ((res_mgt)->queue_mgt)
+#define NBL_RES_MGT_TO_TXRX_MGT(res_mgt) ((res_mgt)->txrx_mgt)
+#define NBL_RES_MGT_TO_VSI_MGT(res_mgt) ((res_mgt)->vsi_mgt)
+#define NBL_RES_MGT_TO_PROD_OPS(res_mgt) ((res_mgt)->product_ops)
#define NBL_RES_MGT_TO_RES_INFO(res_mgt) ((res_mgt)->resource_info)
#define NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) \
(NBL_RES_MGT_TO_RES_INFO(res_mgt)->sriov_info)
@@ -35,13 +43,261 @@
(NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->ops)
#define NBL_RES_MGT_TO_CHAN_PRIV(res_mgt) \
(NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->priv)
+#define NBL_RES_MGT_TO_TX_RING(res_mgt, index) \
+ (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->tx_rings[(index)])
+#define NBL_RES_MGT_TO_RX_RING(res_mgt, index) \
+ (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->rx_rings[(index)])
+#define NBL_RES_MGT_TO_VECTOR(res_mgt, index) \
+ (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->vectors[(index)])
+
+#define NBL_RES_BASE_QID(res_mgt) NBL_RES_MGT_TO_RES_INFO(res_mgt)->base_qid
+#define NBL_RES_NOFITY_QID(res_mgt, local_qid) \
+ (NBL_RES_BASE_QID(res_mgt) * 2 + (local_qid))
#define NBL_MAX_NET_ID NBL_MAX_FUNC
#define NBL_MAX_JUMBO_FRAME_SIZE (9600)
#define NBL_PKT_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
+
+/* --------- QUEUE ---------- */
+#define NBL_MAX_TXRX_QUEUE (2048)
#define NBL_DEFAULT_DESC_NUM (1024)
#define NBL_MAX_TXRX_QUEUE_PER_FUNC (256)
+
#define NBL_DEFAULT_PF_HW_QUEUE_NUM (16)
+#define NBL_VSI_PF_LEGACY_QUEUE_NUM_MAX (NBL_MAX_TXRX_QUEUE_PER_FUNC)
+
+#define NBL_SPECIFIC_VSI_NET_ID_OFFSET (4)
+#define NBL_MAX_CACHE_SIZE (256)
+#define NBL_MAX_BATCH_DESC (64)
+
+enum nbl_qid_map_table_type {
+ NBL_MASTER_QID_MAP_TABLE,
+ NBL_SLAVE_QID_MAP_TABLE,
+ NBL_QID_MAP_TABLE_MAX
+};
+
+struct nbl_queue_vsi_info {
+ u32 curr_qps;
+ u16 curr_qps_static; /* This will not be reset when netdev down */
+ u16 vsi_index;
+ u16 vsi_id;
+ u16 rss_ret_base;
+ u16 rss_entry_size;
+ u16 net_id;
+ u16 queue_offset;
+ u16 queue_num;
+ bool rss_vld;
+ bool vld;
+};
+
+struct nbl_queue_info {
+ struct nbl_queue_vsi_info vsi_info[NBL_VSI_MAX];
+ u64 notify_addr;
+ u32 qid_map_index;
+ u16 num_txrx_queues;
+ u16 rss_ret_base;
+ u16 *txrx_queues;
+ u16 *queues_context;
+ u32 *uvn_stat_pkt_drop;
+ u16 rss_entry_size;
+ u16 split;
+ u32 curr_qps;
+ u16 queue_size;
+};
+
+struct nbl_queue_mgt {
+ DECLARE_BITMAP(txrx_queue_bitmap, NBL_MAX_TXRX_QUEUE);
+ DECLARE_BITMAP(rss_ret_bitmap, NBL_EPRO_RSS_RET_TBL_DEPTH);
+ struct nbl_qid_map_table qid_map_table[NBL_QID_MAP_TABLE_ENTRIES];
+ struct nbl_queue_info queue_info[NBL_MAX_FUNC];
+ u16 net_id_ref_vsinum[NBL_MAX_NET_ID];
+ u32 total_qid_map_entries;
+ int qid_map_select;
+ bool qid_map_ready;
+ u32 qid_map_tail[NBL_QID_MAP_TABLE_MAX];
+};
+
+/* --------- INTERRUPT ---------- */
+#define NBL_MAX_OTHER_INTERRUPT 1024
+#define NBL_MAX_NET_INTERRUPT 4096
+
+struct nbl_msix_map {
+ u16 valid:1;
+ u16 global_msix_index:13;
+ u16 rsv:2;
+};
+
+struct nbl_msix_map_table {
+ struct nbl_msix_map *base_addr;
+ dma_addr_t dma;
+ size_t size;
+};
+
+struct nbl_func_interrupt_resource_mng {
+ u16 num_interrupts;
+ u16 num_net_interrupts;
+ u16 msix_base;
+ u16 msix_max;
+ u16 *interrupts;
+ struct nbl_msix_map_table msix_map_table;
+};
+
+struct nbl_interrupt_mgt {
+ DECLARE_BITMAP(interrupt_net_bitmap, NBL_MAX_NET_INTERRUPT);
+ DECLARE_BITMAP(interrupt_others_bitmap, NBL_MAX_OTHER_INTERRUPT);
+ struct nbl_func_interrupt_resource_mng func_intr_res[NBL_MAX_FUNC];
+};
+
+/* --------- TXRX ---------- */
+struct nbl_txrx_vsi_info {
+ u16 ring_offset;
+ u16 ring_num;
+};
+
+struct nbl_ring_desc {
+ /* buffer address */
+ __le64 addr;
+ /* buffer length */
+ __le32 len;
+ /* buffer ID */
+ __le16 id;
+ /* the flags depending on descriptor type */
+ __le16 flags;
+};
+
+struct nbl_tx_buffer {
+ struct nbl_ring_desc *next_to_watch;
+ union {
+ struct sk_buff *skb;
+ };
+ dma_addr_t dma;
+ u32 len;
+
+ unsigned int bytecount;
+ unsigned short gso_segs;
+ bool page;
+ u32 tx_flags;
+};
+
+struct nbl_dma_info {
+ dma_addr_t addr;
+ struct page *page;
+ u32 size;
+};
+
+struct nbl_page_cache {
+ u32 head;
+ u32 tail;
+ struct nbl_dma_info page_cache[NBL_MAX_CACHE_SIZE];
+};
+
+struct nbl_rx_buffer {
+ struct nbl_dma_info *di;
+ u16 offset;
+ u16 rx_pad;
+ u16 size;
+ bool last_in_page;
+ bool first_in_page;
+};
+
+struct nbl_res_vector {
+ struct nbl_napi_struct nbl_napi;
+ struct nbl_res_tx_ring *tx_ring;
+ struct nbl_res_rx_ring *rx_ring;
+ u8 __iomem *irq_enable_base;
+ u32 irq_data;
+ bool started;
+ bool net_msix_mask_en;
+};
+
+struct nbl_res_tx_ring {
+ /* data path */
+ struct nbl_ring_desc *desc;
+ struct nbl_tx_buffer *tx_bufs;
+ struct device *dma_dev;
+ struct net_device *netdev;
+ u8 __iomem *notify_addr;
+ struct nbl_queue_stats stats;
+ struct u64_stats_sync syncp;
+ struct nbl_tx_queue_stats tx_stats;
+ enum nbl_product_type product_type;
+ u16 queue_index;
+ u16 desc_num;
+ u16 notify_qid;
+ u16 avail_used_flags;
+ /* device ring wrap counter */
+ bool used_wrap_counter;
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 tail_ptr;
+ u16 mode;
+ u16 vlan_tci;
+ u16 vlan_proto;
+ u8 eth_id;
+ u8 extheader_tx_len;
+
+ /* control path */
+ /* dma for desc[] */
+ dma_addr_t dma;
+ /* size for desc[] */
+ unsigned int size;
+ bool valid;
+
+ struct nbl_txrx_vsi_info *vsi_info;
+} ____cacheline_internodealigned_in_smp;
+
+struct nbl_res_rx_ring {
+ /* data path */
+ struct nbl_ring_desc *desc;
+ struct nbl_rx_buffer *rx_bufs;
+ struct nbl_dma_info *di;
+ struct device *dma_dev;
+ struct net_device *netdev;
+ struct page_pool *page_pool;
+ struct nbl_queue_stats stats;
+ struct nbl_rx_queue_stats rx_stats;
+ struct u64_stats_sync syncp;
+ struct nbl_page_cache page_cache;
+
+ enum nbl_product_type product_type;
+ u32 buf_len;
+ u16 avail_used_flags;
+ bool used_wrap_counter;
+ u8 nid;
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 tail_ptr;
+ u16 mode;
+ u16 desc_num;
+ u16 queue_index;
+ u16 vlan_tci;
+ u16 vlan_proto;
+ bool linear_skb;
+
+ /* control path */
+ struct nbl_common_info *common;
+ void *txrx_mgt;
+ /* dma for desc[] */
+ dma_addr_t dma;
+ /* size for desc[] */
+ unsigned int size;
+ bool valid;
+ u16 notify_qid;
+
+ u16 frags_num_per_page;
+} ____cacheline_internodealigned_in_smp;
+
+struct nbl_txrx_mgt {
+ struct nbl_res_vector **vectors;
+ struct nbl_res_tx_ring **tx_rings;
+ struct nbl_res_rx_ring **rx_rings;
+ struct nbl_txrx_vsi_info vsi_info[NBL_VSI_MAX];
+ u16 tx_ring_num;
+ u16 rx_ring_num;
+};
+
+struct nbl_vsi_mgt {
+};
/* --------- INFO ---------- */
struct nbl_sriov_info {
@@ -112,6 +368,18 @@ struct nbl_resource_common_ops {
};
struct nbl_res_product_ops {
+ /* for queue */
+ void (*queue_mgt_init)(struct nbl_queue_mgt *queue_mgt);
+ int (*setup_qid_map_table)(struct nbl_resource_mgt *res_mgt,
+ u16 func_id, u64 notify_addr);
+ void (*remove_qid_map_table)(struct nbl_resource_mgt *res_mgt,
+ u16 func_id);
+ int (*init_qid_map_table)(struct nbl_resource_mgt *res_mgt,
+ struct nbl_queue_mgt *queue_mgt,
+ struct nbl_hw_ops *hw_ops);
+
+ /* for intr */
+ void (*nbl_intr_mgt_init)(struct nbl_resource_mgt *res_mgt);
};
struct nbl_resource_mgt {
@@ -151,6 +419,9 @@ int nbl_res_func_id_to_bdf(struct nbl_resource_mgt *res_mgt, u16 func_id,
u64 nbl_res_get_func_bar_base_addr(struct nbl_resource_mgt *res_mgt,
u16 func_id);
u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id);
+int nbl_intr_mgt_start(struct nbl_resource_mgt *res_mgt);
+void nbl_intr_mgt_stop(struct nbl_resource_mgt *res_mgt);
+int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops);
void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type,
enum nbl_vsi_serv_type *dst_type);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
index ef8ccb5707f5..9bc90fe824bd 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
@@ -16,10 +16,19 @@ struct nbl_resource_pt_ops {
};
struct nbl_resource_ops {
+ int (*configure_msix_map)(void *priv, u16 func_id, u16 num_net_msix,
+ u16 num_others_msix, bool net_msix_mask_en);
+ int (*destroy_msix_map)(void *priv, u16 func_id);
+ int (*enable_mailbox_irq)(void *priv, u16 func_id, u16 vector_id,
+ bool enable_msix);
+ u16 (*get_global_vector)(void *priv, u16 vsi_id, u16 local_vec_id);
+ u16 (*get_msix_entry_id)(void *priv, u16 vsi_id, u16 local_vec_id);
int (*register_net)(void *priv, u16 func_id,
struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result);
int (*unregister_net)(void *priv, u16 func_id);
+ u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id,
+ u32 *irq_data);
u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
u8 *logic_eth_id);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 53fbcf43149b..651100e59f0a 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -12,13 +12,26 @@
/* ------ Basic definitions ------- */
#define NBL_DRIVER_NAME "nbl_core"
+#define NBL_PAIR_ID_GET_TX(id) ((id) * 2 + 1)
+#define NBL_PAIR_ID_GET_RX(id) ((id) * 2)
+
#define NBL_MAX_PF 8
+
+#define NBL_RATE_MBPS_100G 100000
+#define NBL_RATE_MBPS_25G 25000
+#define NBL_RATE_MBPS_10G 10000
+
#define NBL_NEXT_ID(id, max) \
({ \
typeof(id) _id = (id); \
((_id) == (max) ? 0 : (_id) + 1); \
})
+
#define NBL_MAX_FUNC (520)
+
+/* Used for macros to pass checkpatch */
+#define NBL_NAME(x) x
+
enum nbl_product_type {
NBL_LEONIS_TYPE,
NBL_PRODUCT_MAX,
@@ -60,13 +73,111 @@ struct nbl_txrx_queue_param {
u16 rxcsum;
};
-struct nbl_vnet_queue_info_param {
- u32 function_id;
- u32 device_id;
- u32 bus_id;
- u32 msix_idx;
- u32 msix_idx_valid;
- u32 valid;
+struct nbl_qid_map_table {
+ u32 local_qid;
+ u32 notify_addr_l;
+ u32 notify_addr_h;
+ u32 global_qid;
+ u32 ctrlq_flag;
+};
+
+struct nbl_qid_map_param {
+ struct nbl_qid_map_table *qid_map;
+ u16 start;
+ u16 len;
+};
+
+struct nbl_queue_cfg_param {
+ /* queue args */
+ u64 desc;
+ u64 avail;
+ u64 used;
+ u16 size;
+ u16 extend_header;
+ u16 split;
+ u16 last_avail_idx;
+ u16 global_queue_id;
+
+ /* interrupt args */
+ u16 global_vector;
+ u16 intr_en;
+ u16 intr_mask;
+
+ /* dvn args */
+ u16 tx;
+
+ /* uvn args */
+ u16 rxcsum;
+ u16 half_offload_en;
+};
+
+struct nbl_queue_stats {
+ u64 packets;
+ u64 bytes;
+ u64 descs;
+};
+
+struct nbl_tx_queue_stats {
+ u64 tso_packets;
+ u64 tso_bytes;
+ u64 tx_csum_packets;
+ u64 tx_busy;
+ u64 tx_dma_busy;
+ u64 tx_multicast_packets;
+ u64 tx_unicast_packets;
+ u64 tx_skb_free;
+ u64 tx_desc_addr_err_cnt;
+ u64 tx_desc_len_err_cnt;
+};
+
+struct nbl_rx_queue_stats {
+ u64 rx_csum_packets;
+ u64 rx_csum_errors;
+ u64 rx_multicast_packets;
+ u64 rx_unicast_packets;
+ u64 rx_desc_addr_err_cnt;
+ u64 rx_alloc_buf_err_cnt;
+ u64 rx_cache_reuse;
+ u64 rx_cache_full;
+ u64 rx_cache_empty;
+ u64 rx_cache_busy;
+ u64 rx_cache_waive;
+};
+
+struct nbl_stats {
+ /* for toe stats */
+ u64 tso_packets;
+ u64 tso_bytes;
+ u64 tx_csum_packets;
+ u64 rx_csum_packets;
+ u64 rx_csum_errors;
+ u64 tx_busy;
+ u64 tx_dma_busy;
+ u64 tx_multicast_packets;
+ u64 tx_unicast_packets;
+ u64 rx_multicast_packets;
+ u64 rx_unicast_packets;
+ u64 tx_skb_free;
+ u64 tx_desc_addr_err_cnt;
+ u64 tx_desc_len_err_cnt;
+ u64 rx_desc_addr_err_cnt;
+ u64 rx_alloc_buf_err_cnt;
+ u64 rx_cache_reuse;
+ u64 rx_cache_full;
+ u64 rx_cache_empty;
+ u64 rx_cache_busy;
+ u64 rx_cache_waive;
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 rx_packets;
+ u64 rx_bytes;
+};
+
+enum nbl_fw_port_speed {
+ NBL_FW_PORT_SPEED_10G,
+ NBL_FW_PORT_SPEED_25G,
+ NBL_FW_PORT_SPEED_50G,
+ NBL_FW_PORT_SPEED_100G,
};
enum {
@@ -102,9 +213,21 @@ enum {
};
#define NBL_FEATURE(name) (1 << (NBL_##name##_BIT))
-#define NBL_FEATURE_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
+
enum nbl_performance_mode {
NBL_QUIRKS_NO_TOE,
NBL_QUIRKS_UVN_PREFETCH_ALIGN,
};
+
+struct nbl_ring_param {
+ u16 tx_ring_num;
+ u16 rx_ring_num;
+ u16 queue_size;
+};
+
+struct nbl_napi_struct {
+ struct napi_struct napi;
+ atomic_t is_irq;
+};
+
#endif
--
2.47.3
* [PATCH v3 net-next 07/15] net/nebula-matrix: add queue resource implementation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (5 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 06/15] net/nebula-matrix: add intr resource implementation illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 08/15] net/nebula-matrix: add vsi " illusion.wang
` (8 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
This patch adds the queue resource implementation:
1. Queue Resource Management
Allocation & Release:
nbl_res_queue_alloc_txrx_queues: Allocates transmit/receive queues for
a VSI (Virtual Station Interface).
nbl_res_queue_free_txrx_queues: Releases queue resources.
Initialization:
nbl_res_queue_setup_queue_info: Initializes queue metadata (e.g., queue
count, context).
nbl_res_queue_remove_queue_info: Cleans up queue metadata.
2. Queue Mapping & Configuration
QID Mapping Table:
nbl_res_queue_setup_qid_map_table_leonis: Sets up mappings from local
QIDs to global QIDs.
nbl_res_queue_remove_qid_map_table_leonis: Removes entries from the
mapping table.
Hardware Configuration:
nbl_res_queue_setup_hw_dq: Configures hardware TX/RX queues (descriptor
addresses, interrupt settings).
nbl_res_queue_remove_all_hw_dq: Disables and cleans up hardware queues.
3. RSS (Receive Side Scaling) Support
RSS Table Initialization:
nbl_res_queue_init_epro_rss_key: Initializes RSS hash keys.
nbl_res_queue_setup_rss: Allocates RSS indir table entries for a VSI.
nbl_res_queue_remove_rss: Releases RSS resources.
Indirection Table:
nbl_res_queue_set_rxfh_indir: Configures the RSS indirection table for
traffic distribution.
4. VSI-Queue Association
VSI Registration:
nbl_res_queue_register_vsi2q: Registers a VSI with the queue manager.
nbl_res_queue_setup_q2vsi/nbl_res_queue_remove_q2vsi: Binds/unbinds
queues to a VSI.
Queue Group Configuration:
nbl_res_queue_setup_cqs: Configures queue groups (e.g., for multi-queue
load balancing).
nbl_res_queue_remove_cqs: Cleans up queue groups.
5. Traffic Scheduling & Shaping
Scheduling:
nbl_res_queue_cfg_dsch: Maps queues to traffic classes (QoS).
Hardware Initialization:
nbl_res_queue_init_epro_vpt_table: Initializes virtual port tables.
nbl_res_queue_init_ipro_dn_sport_tbl: Configures downstream traffic
tables.
6. Initialization & Cleanup
Module Initialization:
nbl_res_queue_init: Initializes the queue manager (e.g., mapping tables,
RSS tables).
nbl_res_queue_clear_queues: Cleans up all queue resources.
Operation Registration:
nbl_queue_setup_ops_leonis: Registers queue management function pointers.
nbl_queue_remove_ops_leonis: Unregisters operations.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 2 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 955 ++++++++++++++
.../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c | 1129 +++++++++++++++++
.../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h | 15 +
.../nbl_hw_leonis/nbl_resource_leonis.c | 13 +
.../nbl_hw_leonis/nbl_resource_leonis.h | 12 +
.../nebula-matrix/nbl/nbl_hw/nbl_queue.c | 60 +
.../nebula-matrix/nbl/nbl_hw/nbl_queue.h | 11 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 4 +
.../nbl/nbl_include/nbl_def_hw.h | 48 +
.../nbl/nbl_include/nbl_def_resource.h | 27 +
.../nbl/nbl_include/nbl_include.h | 28 +
12 files changed, 2304 insertions(+)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index d27f0bd7f81b..1e873a4cd31e 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -7,10 +7,12 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o
nbl_core-objs += nbl_common/nbl_common.o \
nbl_channel/nbl_channel.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
+ nbl_hw/nbl_hw_leonis/nbl_queue_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
nbl_hw/nbl_resource.o \
nbl_hw/nbl_interrupt.o \
+ nbl_hw/nbl_queue.o \
nbl_core/nbl_dispatch.o \
nbl_core/nbl_service.o \
nbl_core/nbl_dev.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index baad9e6ad4c9..9b3ba97428a9 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -4,7 +4,13 @@
* Author:
*/
+#include <linux/if_bridge.h>
+
#include "nbl_hw_leonis.h"
+#include "nbl_hw/nbl_hw_leonis/base/nbl_datapath.h"
+#include "nbl_hw/nbl_hw_leonis/base/nbl_ppe.h"
+#include "nbl_hw_leonis_regs.h"
+
static u32 nbl_hw_get_quirks(void *priv)
{
struct nbl_hw_mgt *hw_mgt = priv;
@@ -19,6 +25,924 @@ static u32 nbl_hw_get_quirks(void *priv)
return quirks;
}
+static int nbl_hw_init_qid_map_table(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_virtio_qid_map_table info = { 0 }, info2 = { 0 };
+ struct device *dev = NBL_HW_MGT_TO_DEV(hw_mgt);
+ u64 reg;
+ u16 i, j, k;
+
+ info.local_qid = 0x1FF;
+ info.notify_addr_l = 0x7FFFFF;
+ info.notify_addr_h = 0xFFFFFFFF;
+ info.global_qid = 0xFFF;
+ info.ctrlq_flag = 0x1;
+ info.rsv1 = 0;
+ info.rsv2 = 0;
+
+ for (k = 0; k < 2; k++) { /* 0 is the primary table, 1 is the standby table */
+ for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) {
+ j = 0;
+ do {
+ reg = NBL_PCOMPLETER_QID_MAP_REG_ARR(k, i);
+ nbl_hw_wr_regs(hw_mgt, reg, (u8 *)&info,
+ sizeof(info));
+ nbl_hw_rd_regs(hw_mgt, reg, (u8 *)&info2,
+ sizeof(info2));
+ if (likely(!memcmp(&info, &info2,
+ sizeof(info))))
+ break;
+ j++;
+ } while (j < NBL_REG_WRITE_MAX_TRY_TIMES);
+
+ if (j == NBL_REG_WRITE_MAX_TRY_TIMES)
+ dev_err(dev,
+ "Write to qid map table entry %hu failed\n",
+ i);
+ }
+ }
+
+ return 0;
+}
+
+static int nbl_hw_set_qid_map_table(void *priv, void *data, int qid_map_select)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct nbl_qid_map_param *param = (struct nbl_qid_map_param *)data;
+ struct nbl_virtio_qid_map_table info = { 0 }, info_data = { 0 };
+ struct nbl_queue_table_select select = { 0 };
+ u64 reg;
+ int i, j;
+
+ for (i = 0; i < param->len; i++) {
+ j = 0;
+
+ info.local_qid = param->qid_map[i].local_qid;
+ info.notify_addr_l = param->qid_map[i].notify_addr_l;
+ info.notify_addr_h = param->qid_map[i].notify_addr_h;
+ info.global_qid = param->qid_map[i].global_qid;
+ info.ctrlq_flag = param->qid_map[i].ctrlq_flag;
+
+ do {
+ reg = NBL_PCOMPLETER_QID_MAP_REG_ARR(qid_map_select,
+ param->start + i);
+ nbl_hw_wr_regs(hw_mgt, reg, (u8 *)(&info),
+ sizeof(info));
+ nbl_hw_rd_regs(hw_mgt, reg, (u8 *)(&info_data),
+ sizeof(info_data));
+ if (likely(!memcmp(&info, &info_data, sizeof(info))))
+ break;
+ j++;
+ } while (j < NBL_REG_WRITE_MAX_TRY_TIMES);
+
+ if (j == NBL_REG_WRITE_MAX_TRY_TIMES)
+ nbl_err(common,
+ "Write to qid map table entry %d failed\n",
+ param->start + i);
+ }
+
+ select.select = qid_map_select;
+ nbl_hw_wr_regs(hw_mgt, NBL_PCOMPLETER_QUEUE_TABLE_SELECT_REG,
+ (u8 *)&select, sizeof(select));
+
+ return 0;
+}
+
+static int nbl_hw_set_qid_map_ready(void *priv, bool ready)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_queue_table_ready queue_table_ready = { 0 };
+
+ queue_table_ready.ready = ready;
+ nbl_hw_wr_regs(hw_mgt, NBL_PCOMPLETER_QUEUE_TABLE_READY_REG,
+ (u8 *)&queue_table_ready, sizeof(queue_table_ready));
+
+ return 0;
+}
+
+static int nbl_hw_cfg_ipro_queue_tbl(void *priv, u16 queue_id, u16 vsi_id,
+ u8 enable)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_ipro_queue_tbl ipro_queue_tbl = { 0 };
+
+ ipro_queue_tbl.vsi_en = enable;
+ ipro_queue_tbl.vsi_id = vsi_id;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_IPRO_QUEUE_TBL(queue_id),
+ (u8 *)&ipro_queue_tbl, sizeof(ipro_queue_tbl));
+
+ return 0;
+}
+
+static int nbl_hw_cfg_ipro_dn_sport_tbl(void *priv, u16 vsi_id, u16 dst_eth_id,
+ u16 bmode, bool binit)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_ipro_dn_src_port_tbl dpsport = { 0 };
+
+ if (binit) {
+ dpsport.entry_vld = 1;
+ dpsport.hw_flow = 1;
+ dpsport.set_dport.dport.down.upcall_flag = AUX_FWD_TYPE_NML_FWD;
+ dpsport.set_dport.dport.down.port_type = SET_DPORT_TYPE_ETH_LAG;
+ dpsport.set_dport.dport.down.lag_vld = 0;
+ dpsport.set_dport.dport.down.eth_vld = 1;
+ dpsport.set_dport.dport.down.eth_id = dst_eth_id;
+ dpsport.vlan_layer_num_1 = 3;
+ dpsport.set_dport_en = 1;
+ } else {
+ nbl_hw_rd_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TABLE(vsi_id),
+ (u8 *)&dpsport,
+ sizeof(struct nbl_ipro_dn_src_port_tbl));
+ }
+
+ if (bmode == BRIDGE_MODE_VEPA)
+ dpsport.set_dport.dport.down.next_stg_sel = NEXT_STG_SEL_EPRO;
+ else
+ dpsport.set_dport.dport.down.next_stg_sel = NEXT_STG_SEL_NONE;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TABLE(vsi_id),
+ (u8 *)&dpsport, sizeof(struct nbl_ipro_dn_src_port_tbl));
+
+ return 0;
+}
+
+static int nbl_hw_set_vnet_queue_info(void *priv,
+ struct nbl_vnet_queue_info_param *param,
+ u16 queue_id)
+{
+ struct nbl_hw_mgt_leonis *hw_mgt_leonis =
+ (struct nbl_hw_mgt_leonis *)priv;
+ struct nbl_hw_mgt *hw_mgt = &hw_mgt_leonis->hw_mgt;
+ struct nbl_host_vnet_qinfo host_vnet_qinfo = { 0 };
+
+ host_vnet_qinfo.function_id = param->function_id;
+ host_vnet_qinfo.device_id = param->device_id;
+ host_vnet_qinfo.bus_id = param->bus_id;
+ host_vnet_qinfo.valid = param->valid;
+ host_vnet_qinfo.msix_idx = param->msix_idx;
+ host_vnet_qinfo.msix_idx_valid = param->msix_idx_valid;
+
+ if (hw_mgt_leonis->ro_enable) {
+ host_vnet_qinfo.ido_en = 1;
+ host_vnet_qinfo.rlo_en = 1;
+ }
+
+ nbl_hw_wr_regs(hw_mgt, NBL_PADPT_HOST_VNET_QINFO_REG_ARR(queue_id),
+ (u8 *)&host_vnet_qinfo, sizeof(host_vnet_qinfo));
+
+ return 0;
+}
+
+static int nbl_hw_clear_vnet_queue_info(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_host_vnet_qinfo host_vnet_qinfo = { 0 };
+
+ nbl_hw_wr_regs(hw_mgt, NBL_PADPT_HOST_VNET_QINFO_REG_ARR(queue_id),
+ (u8 *)&host_vnet_qinfo, sizeof(host_vnet_qinfo));
+ return 0;
+}
+
+static int nbl_hw_reset_dvn_cfg(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct nbl_dvn_queue_reset queue_reset = { 0 };
+ struct nbl_dvn_queue_reset_done queue_reset_done = { 0 };
+ int i = 0;
+
+ queue_reset.dvn_queue_index = queue_id;
+ queue_reset.vld = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_RESET_REG, (u8 *)&queue_reset,
+ sizeof(queue_reset));
+
+ udelay(5);
+ nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_RESET_DONE_REG,
+ (u8 *)&queue_reset_done, sizeof(queue_reset_done));
+ while (!queue_reset_done.flag) {
+ i++;
+ if (!(i % 10)) {
+ nbl_err(common,
+ "Wait too long for tx queue reset to be done");
+ break;
+ }
+
+ udelay(5);
+ nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_RESET_DONE_REG,
+ (u8 *)&queue_reset_done,
+ sizeof(queue_reset_done));
+ }
+
+ nbl_debug(common, "dvn:%u cfg reset succeeded, waited %d * 5us\n",
+ queue_id, i);
+ return 0;
+}
+
+static int nbl_hw_reset_uvn_cfg(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct nbl_uvn_queue_reset queue_reset = { 0 };
+ struct nbl_uvn_queue_reset_done queue_reset_done = { 0 };
+ int i = 0;
+
+ queue_reset.index = queue_id;
+ queue_reset.vld = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_RESET_REG, (u8 *)&queue_reset,
+ sizeof(queue_reset));
+
+ udelay(5);
+ nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_RESET_DONE_REG,
+ (u8 *)&queue_reset_done, sizeof(queue_reset_done));
+ while (!queue_reset_done.flag) {
+ i++;
+ if (!(i % 10)) {
+ nbl_err(common,
+ "Wait too long for rx queue reset to be done");
+ break;
+ }
+
+ udelay(5);
+ nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_RESET_DONE_REG,
+ (u8 *)&queue_reset_done,
+ sizeof(queue_reset_done));
+ }
+
+ nbl_debug(common, "uvn:%u cfg reset succeeded, waited %d * 5us\n",
+ queue_id, i);
+ return 0;
+}
+
+static int nbl_hw_restore_dvn_context(void *priv, u16 queue_id, u16 split,
+ u16 last_avail_index)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct dvn_queue_context cxt = { 0 };
+
+ cxt.dvn_ring_wrap_counter = last_avail_index >> 15;
+ if (split)
+ cxt.dvn_avail_ring_read = last_avail_index;
+ else
+ cxt.dvn_l1_ring_read = last_avail_index & 0x7FFF;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_CXT_TABLE_ARR(queue_id),
+ (u8 *)&cxt, sizeof(cxt));
+ nbl_info(common, "config tx ring: %u, last avail idx: %u\n", queue_id,
+ last_avail_index);
+
+ return 0;
+}
+
+static int nbl_hw_restore_uvn_context(void *priv, u16 queue_id, u16 split,
+ u16 last_avail_index)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct uvn_queue_cxt cxt = { 0 };
+
+ cxt.wrap_count = last_avail_index >> 15;
+ if (split)
+ cxt.queue_head = last_avail_index;
+ else
+ cxt.queue_head = last_avail_index & 0x7FFF;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_CXT_TABLE_ARR(queue_id),
+ (u8 *)&cxt, sizeof(cxt));
+ nbl_info(common, "config rx ring: %u, last avail idx: %u\n", queue_id,
+ last_avail_index);
+
+ return 0;
+}
+
+static int nbl_hw_cfg_tx_queue(void *priv, void *data, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_queue_cfg_param *queue_cfg =
+ (struct nbl_queue_cfg_param *)data;
+ struct dvn_queue_table info = { 0 };
+
+ info.dvn_queue_baddr = queue_cfg->desc;
+ if (!queue_cfg->split && !queue_cfg->extend_header)
+ queue_cfg->avail |= 3;
+ info.dvn_avail_baddr = queue_cfg->avail;
+ info.dvn_used_baddr = queue_cfg->used;
+ info.dvn_queue_size = ilog2(queue_cfg->size);
+ info.dvn_queue_type = queue_cfg->split;
+ info.dvn_queue_en = 1;
+ info.dvn_extend_header_en = queue_cfg->extend_header;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+ sizeof(info));
+
+ return 0;
+}
+
+static int nbl_hw_cfg_rx_queue(void *priv, void *data, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_queue_cfg_param *queue_cfg =
+ (struct nbl_queue_cfg_param *)data;
+ struct uvn_queue_table info = { 0 };
+
+ info.queue_baddr = queue_cfg->desc;
+ info.avail_baddr = queue_cfg->avail;
+ info.used_baddr = queue_cfg->used;
+ info.queue_size_mask_pow = ilog2(queue_cfg->size);
+ info.queue_type = queue_cfg->split;
+ info.extend_header_en = queue_cfg->extend_header;
+ info.half_offload_en = queue_cfg->half_offload_en;
+ info.guest_csum_en = queue_cfg->rxcsum;
+ info.queue_enable = 1;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+ sizeof(info));
+
+ return 0;
+}
+
+static int nbl_hw_cfg_q2tc_netid(void *priv, u16 queue_id, u16 netid, u16 vld)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct dsch_vn_q2tc_cfg_tbl info;
+
+ nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id),
+ (u8 *)&info, sizeof(info));
+ info.tcid = (info.tcid & 0x7) | (netid << 3);
+ info.vld = vld;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id),
+ (u8 *)&info, sizeof(info));
+ return 0;
+}
+
+static void nbl_hw_active_shaping(void *priv, u16 func_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_shaping_net shaping_net = { 0 };
+ struct dsch_vn_sha2net_map_tbl sha2net = { 0 };
+ struct dsch_vn_net2sha_map_tbl net2sha = { 0 };
+
+ nbl_hw_rd_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net,
+ sizeof(shaping_net));
+
+ if (!shaping_net.depth)
+ return;
+
+ sha2net.vld = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(func_id),
+ (u8 *)&sha2net, sizeof(sha2net));
+
+ shaping_net.valid = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net,
+ sizeof(shaping_net));
+
+ net2sha.vld = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(func_id),
+ (u8 *)&net2sha, sizeof(net2sha));
+}
+
+static void nbl_hw_deactive_shaping(void *priv, u16 func_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_shaping_net shaping_net = { 0 };
+ struct dsch_vn_sha2net_map_tbl sha2net = { 0 };
+ struct dsch_vn_net2sha_map_tbl net2sha = { 0 };
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(func_id),
+ (u8 *)&net2sha, sizeof(net2sha));
+
+ nbl_hw_rd_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net,
+ sizeof(shaping_net));
+ shaping_net.valid = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net,
+ sizeof(shaping_net));
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(func_id),
+ (u8 *)&sha2net, sizeof(sha2net));
+}
+
+static int nbl_hw_cfg_dsch_net_to_group(void *priv, u16 func_id, u16 group_id,
+ u16 vld)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct dsch_vn_n2g_cfg_tbl info = { 0 };
+
+ info.grpid = group_id;
+ info.vld = vld;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_N2G_CFG_TABLE_REG_ARR(func_id),
+ (u8 *)&info, sizeof(info));
+ return 0;
+}
+
+static int nbl_hw_cfg_epro_rss_ret(void *priv, u32 index, u8 size_type,
+ u32 q_num, u16 *queue_list, const u32 *indir)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct nbl_epro_rss_ret_tbl rss_ret = { 0 };
+ u32 table_id, table_end, group_count, odd_num, queue_id = 0;
+
+ group_count = NBL_EPRO_RSS_ENTRY_SIZE_UNIT << size_type;
+ if (group_count > NBL_EPRO_RSS_ENTRY_MAX_COUNT) {
+ nbl_err(common,
+ "Rss group entry size type %u exceed the max value %u",
+ size_type, NBL_EPRO_RSS_ENTRY_SIZE_256);
+ return -EINVAL;
+ }
+
+ if (q_num > group_count) {
+ nbl_err(common, "q_num %u exceed the rss group count %u\n",
+ q_num, group_count);
+ return -EINVAL;
+ }
+ if (index >= NBL_EPRO_RSS_RET_TBL_DEPTH ||
+ (index + group_count) > NBL_EPRO_RSS_RET_TBL_DEPTH) {
+ nbl_err(common,
+ "index %u exceed the max table entry %u, entry size: %u\n",
+ index, NBL_EPRO_RSS_RET_TBL_DEPTH, group_count);
+ return -EINVAL;
+ }
+
+ table_id = index / 2;
+ table_end = (index + group_count) / 2;
+ odd_num = index % 2;
+ nbl_hw_rd_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), (u8 *)&rss_ret,
+ sizeof(rss_ret));
+
+ if (indir) {
+ if (odd_num) {
+ rss_ret.vld1 = 1;
+ rss_ret.dqueue1 = indir[queue_id++];
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+ table_id++;
+ }
+
+ for (; table_id < table_end; table_id++) {
+ rss_ret.vld0 = 1;
+ rss_ret.dqueue0 = indir[queue_id++];
+ rss_ret.vld1 = 1;
+ rss_ret.dqueue1 = indir[queue_id++];
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+ }
+
+ nbl_hw_rd_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+
+ if (odd_num) {
+ rss_ret.vld0 = 1;
+ rss_ret.dqueue0 = indir[queue_id++];
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+ }
+ } else {
+ if (odd_num) {
+ rss_ret.vld1 = 1;
+ rss_ret.dqueue1 = queue_list[queue_id++];
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+ table_id++;
+ }
+
+ queue_id = queue_id % q_num;
+ for (; table_id < table_end; table_id++) {
+ rss_ret.vld0 = 1;
+ rss_ret.dqueue0 = queue_list[queue_id++];
+ queue_id = queue_id % q_num;
+ rss_ret.vld1 = 1;
+ rss_ret.dqueue1 = queue_list[queue_id++];
+ queue_id = queue_id % q_num;
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+ }
+
+ nbl_hw_rd_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+
+ if (odd_num) {
+ rss_ret.vld0 = 1;
+ rss_ret.dqueue0 = queue_list[queue_id++];
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id),
+ (u8 *)&rss_ret, sizeof(rss_ret));
+ }
+ }
+
+ return 0;
+}
+
+static struct nbl_epro_rss_key epro_rss_key_def = {
+ .key0 = 0x6d5a6d5a6d5a6d5a,
+ .key1 = 0x6d5a6d5a6d5a6d5a,
+ .key2 = 0x6d5a6d5a6d5a6d5a,
+ .key3 = 0x6d5a6d5a6d5a6d5a,
+ .key4 = 0x6d5a6d5a6d5a6d5a,
+};
+
+static int nbl_hw_init_epro_rss_key(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_KEY_REG, (u8 *)&epro_rss_key_def,
+ sizeof(epro_rss_key_def));
+
+ return 0;
+}
+
+static int nbl_hw_init_epro_vpt_tbl(void *priv, u16 vsi_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_epro_vpt_tbl epro_vpt_tbl = { 0 };
+
+ epro_vpt_tbl.vld = 1;
+ epro_vpt_tbl.fwd = NBL_EPRO_FWD_TYPE_DROP;
+ epro_vpt_tbl.rss_alg_sel = NBL_EPRO_RSS_ALG_TOEPLITZ_HASH;
+ epro_vpt_tbl.rss_key_type_ipv4 = NBL_EPRO_RSS_KEY_TYPE_IPV4_L4;
+ epro_vpt_tbl.rss_key_type_ipv6 = NBL_EPRO_RSS_KEY_TYPE_IPV6_L4;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl,
+ sizeof(struct nbl_epro_vpt_tbl));
+
+ return 0;
+}
+
+static int nbl_hw_set_epro_rss_pt(void *priv, u16 vsi_id, u16 rss_ret_base,
+ u16 rss_entry_size)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_epro_rss_pt_tbl epro_rss_pt_tbl = { 0 };
+ struct nbl_epro_vpt_tbl epro_vpt_tbl;
+ u16 entry_size;
+
+ if (rss_entry_size > NBL_EPRO_RSS_ENTRY_MAX_SIZE)
+ entry_size = NBL_EPRO_RSS_ENTRY_MAX_SIZE;
+ else
+ entry_size = rss_entry_size;
+
+ epro_rss_pt_tbl.vld = 1;
+ epro_rss_pt_tbl.entry_size = entry_size;
+ epro_rss_pt_tbl.offset0_vld = 1;
+ epro_rss_pt_tbl.offset0 = rss_ret_base;
+ if (rss_entry_size > NBL_EPRO_RSS_ENTRY_MAX_SIZE) {
+ epro_rss_pt_tbl.offset1_vld = 1;
+ epro_rss_pt_tbl.offset1 =
+ rss_ret_base +
+ (NBL_EPRO_RSS_ENTRY_SIZE_UNIT << entry_size);
+ } else {
+ epro_rss_pt_tbl.offset1_vld = 0;
+ epro_rss_pt_tbl.offset1 = 0;
+ }
+
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_PT_TABLE(vsi_id),
+ (u8 *)&epro_rss_pt_tbl, sizeof(epro_rss_pt_tbl));
+
+ nbl_hw_rd_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl,
+ sizeof(epro_vpt_tbl));
+ epro_vpt_tbl.fwd = NBL_EPRO_FWD_TYPE_NORMAL;
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl,
+ sizeof(epro_vpt_tbl));
+
+ return 0;
+}
+
+static int nbl_hw_clear_epro_rss_pt(void *priv, u16 vsi_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_epro_rss_pt_tbl epro_rss_pt_tbl = { 0 };
+ struct nbl_epro_vpt_tbl epro_vpt_tbl;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_PT_TABLE(vsi_id),
+ (u8 *)&epro_rss_pt_tbl, sizeof(epro_rss_pt_tbl));
+
+ nbl_hw_rd_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl,
+ sizeof(epro_vpt_tbl));
+ epro_vpt_tbl.fwd = NBL_EPRO_FWD_TYPE_DROP;
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl,
+ sizeof(epro_vpt_tbl));
+
+ return 0;
+}
+
+static int nbl_hw_disable_dvn(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct dvn_queue_table info = { 0 };
+
+ nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+ sizeof(info));
+ info.dvn_queue_en = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+ sizeof(info));
+ return 0;
+}
+
+static int nbl_hw_disable_uvn(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct uvn_queue_table info = { 0 };
+
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+ sizeof(info));
+ return 0;
+}
+
+static bool nbl_hw_is_txq_drain_out(struct nbl_hw_mgt *hw_mgt, u16 queue_id,
+ struct dsch_vn_tc_q_list_tbl *tc_q_list)
+{
+ nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_TC_Q_LIST_TABLE_REG_ARR(queue_id),
+ (u8 *)tc_q_list, sizeof(*tc_q_list));
+	return !tc_q_list->regi && !tc_q_list->fly;
+}
+
+static bool nbl_hw_is_rxq_drain_out(struct nbl_hw_mgt *hw_mgt, u16 queue_id)
+{
+ struct uvn_desc_cxt cache_ctx = { 0 };
+
+ nbl_hw_rd_regs(hw_mgt, NBL_UVN_DESC_CXT_TABLE_ARR(queue_id),
+ (u8 *)&cache_ctx, sizeof(cache_ctx));
+	return cache_ctx.cache_pref_num_prev == cache_ctx.cache_pref_num_post;
+}
+
+static int nbl_hw_lso_dsch_drain(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct dsch_vn_tc_q_list_tbl tc_q_list = { 0 };
+ struct dsch_vn_q2tc_cfg_tbl info;
+ int i = 0;
+
+ nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id),
+ (u8 *)&info, sizeof(info));
+ info.vld = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id),
+ (u8 *)&info, sizeof(info));
+ do {
+ if (nbl_hw_is_txq_drain_out(hw_mgt, queue_id, &tc_q_list))
+ break;
+
+ usleep_range(10, 20);
+ } while (++i < NBL_DRAIN_WAIT_TIMES);
+
+ if (i >= NBL_DRAIN_WAIT_TIMES) {
+		nbl_err(common,
+			"nbl queue %u lso dsch drain timeout, regi %u, fly %u, vld %u\n",
+			queue_id, tc_q_list.regi, tc_q_list.fly, tc_q_list.vld);
+		return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static int nbl_hw_rsc_cache_drain(void *priv, u16 queue_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ int i = 0;
+
+ do {
+ if (nbl_hw_is_rxq_drain_out(hw_mgt, queue_id))
+ break;
+
+ usleep_range(10, 20);
+ } while (++i < NBL_DRAIN_WAIT_TIMES);
+
+ if (i >= NBL_DRAIN_WAIT_TIMES) {
+ nbl_err(common, "nbl queue %u rsc cache drain timeout\n",
+ queue_id);
+		return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static u16 nbl_hw_save_dvn_ctx(void *priv, u16 queue_id, u16 split)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct dvn_queue_context dvn_ctx = { 0 };
+
+ nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_CXT_TABLE_ARR(queue_id),
+ (u8 *)&dvn_ctx, sizeof(dvn_ctx));
+
+ nbl_debug(common, "DVNQ save ctx: %d packed: %08x %08x split: %08x\n",
+ queue_id, dvn_ctx.dvn_ring_wrap_counter,
+ dvn_ctx.dvn_l1_ring_read, dvn_ctx.dvn_avail_ring_idx);
+
+	if (split)
+		return dvn_ctx.dvn_avail_ring_idx;
+
+	return (dvn_ctx.dvn_l1_ring_read & 0x7FFF) |
+	       (dvn_ctx.dvn_ring_wrap_counter << 15);
+}
+
+static u16 nbl_hw_save_uvn_ctx(void *priv, u16 queue_id, u16 split,
+ u16 queue_size)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+ struct uvn_queue_cxt queue_cxt = { 0 };
+ struct uvn_desc_cxt desc_cxt = { 0 };
+ u16 cache_diff, queue_head, wrap_count;
+
+ nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_CXT_TABLE_ARR(queue_id),
+ (u8 *)&queue_cxt, sizeof(queue_cxt));
+ nbl_hw_rd_regs(hw_mgt, NBL_UVN_DESC_CXT_TABLE_ARR(queue_id),
+ (u8 *)&desc_cxt, sizeof(desc_cxt));
+
+ nbl_debug(common,
+ "UVN save ctx: %d cache_tail: %08x cache_head %08x queue_head: %08x\n",
+ queue_id, desc_cxt.cache_tail, desc_cxt.cache_head,
+ queue_cxt.queue_head);
+
+	if (!queue_size)
+		return 0xffff;
+
+	cache_diff = (desc_cxt.cache_tail - desc_cxt.cache_head + 64) & 0x3F;
+	queue_head = (queue_cxt.queue_head - cache_diff + 65536) & 0xFFFF;
+	wrap_count = !((queue_head / queue_size) & 0x1);
+
+	nbl_debug(common, "UVN save ctx: %d packed: %08x %08x split: %08x\n",
+		  queue_id, wrap_count, queue_head, queue_head);
+
+	if (split)
+		return queue_head;
+
+	return (queue_head & 0x7FFF) | (wrap_count << 15);
+}
+
+static void nbl_hw_setup_queue_switch(void *priv, u16 eth_id)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_ipro_upsport_tbl upsport = { 0 };
+ struct nbl_epro_ept_tbl ept_tbl = { 0 };
+ struct dsch_vn_g2p_cfg_tbl info = { 0 };
+
+ upsport.hw_flow = 1;
+ upsport.entry_vld = 1;
+ upsport.set_dport_en = 1;
+ upsport.set_dport_pri = 0;
+ upsport.vlan_layer_num_0 = 3;
+ upsport.vlan_layer_num_1 = 3;
+	/* promisc is disabled by default */
+ upsport.set_dport.data = 0xFFF;
+
+ ept_tbl.vld = 1;
+ ept_tbl.fwd = 1;
+
+ info.vld = 1;
+ info.port = (eth_id << 1);
+
+ nbl_hw_wr_regs(hw_mgt, NBL_IPRO_UP_SPORT_TABLE(eth_id), (u8 *)&upsport,
+ sizeof(upsport));
+
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_EPT_TABLE(eth_id), (u8 *)&ept_tbl,
+ sizeof(struct nbl_epro_ept_tbl));
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_G2P_CFG_TABLE_REG_ARR(eth_id),
+ (u8 *)&info, sizeof(info));
+}
+
+static void nbl_hw_init_pfc(void *priv, u8 ether_ports)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_dqm_rxmac_tx_port_bp_en_cfg dqm_port_bp_en = { 0 };
+ struct nbl_dqm_rxmac_tx_cos_bp_en_cfg dqm_cos_bp_en = { 0 };
+ struct nbl_uqm_rx_cos_bp_en_cfg uqm_rx_cos_bp_en = { 0 };
+ struct nbl_uqm_tx_cos_bp_en_cfg uqm_tx_cos_bp_en = { 0 };
+ struct nbl_ustore_port_fc_th ustore_port_fc_th = { 0 };
+ struct nbl_ustore_cos_fc_th ustore_cos_fc_th = { 0 };
+ struct nbl_epro_port_pri_mdf_en_cfg pri_mdf_en_cfg = { 0 };
+ struct nbl_epro_cos_map cos_map = { 0 };
+ struct nbl_upa_pri_sel_conf sel_conf = { 0 };
+ struct nbl_upa_pri_conf conf_table = { 0 };
+ int i, j;
+
+ /* DQM */
+ /* set default bp_mode: port */
+ /* TX bp: dqm send received ETH RX Pause to DSCH */
+ /* dqm rxmac_tx_port_bp_en */
+ dqm_port_bp_en.eth0 = 1;
+ dqm_port_bp_en.eth1 = 1;
+ dqm_port_bp_en.eth2 = 1;
+ dqm_port_bp_en.eth3 = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DQM_RXMAC_TX_PORT_BP_EN,
+ (u8 *)(&dqm_port_bp_en), sizeof(dqm_port_bp_en));
+
+	/* TX bp: dqm does not send received ETH RX PFC to DSCH */
+ /* dqm rxmac_tx_cos_bp_en */
+ dqm_cos_bp_en.eth0 = 0;
+ dqm_cos_bp_en.eth1 = 0;
+ dqm_cos_bp_en.eth2 = 0;
+ dqm_cos_bp_en.eth3 = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_DQM_RXMAC_TX_COS_BP_EN,
+ (u8 *)(&dqm_cos_bp_en), sizeof(dqm_cos_bp_en));
+
+ /* UQM */
+ /* RX bp: uqm receive loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h port bp */
+ /* uqm rx_port_bp_en_cfg is ok */
+	/* TX bp: uqm send loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h port bp */
+	/* uqm tx_port_bp_en_cfg is ok */
+
+ /* RX bp: uqm receive loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h cos bp */
+ /* uqm rx_cos_bp_en */
+ uqm_rx_cos_bp_en.vld_l = 0xFFFFFFFF;
+ uqm_rx_cos_bp_en.vld_h = 0xFFFF;
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_RX_COS_BP_EN, (u8 *)(&uqm_rx_cos_bp_en),
+ sizeof(uqm_rx_cos_bp_en));
+
+ /* RX bp: uqm send received loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h cos
+ * bp to USTORE
+ */
+ /* uqm tx_cos_bp_en */
+ uqm_tx_cos_bp_en.vld_l = 0xFFFFFFFF;
+ uqm_tx_cos_bp_en.vld_h = 0xFF;
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_TX_COS_BP_EN, (u8 *)(&uqm_tx_cos_bp_en),
+ sizeof(uqm_tx_cos_bp_en));
+
+ /* TX bp: DSCH dp0-3 response to DQM dp0-3 pfc/port bp */
+ /* dsch_dpt_pfc_map_vnh default value is ok */
+ /* TX bp: DSCH response to DQM cos bp, pkt_cos -> sch_cos map table */
+ /* dsch vn_host_dpx_prixx_p2s_map_cfg is ok */
+
+ /* downstream: enable modify packet pri */
+ /* epro port_pri_mdf_en */
+ pri_mdf_en_cfg.eth0 = 0;
+ pri_mdf_en_cfg.eth1 = 0;
+ pri_mdf_en_cfg.eth2 = 0;
+ pri_mdf_en_cfg.eth3 = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_PORT_PRI_MDF_EN,
+ (u8 *)(&pri_mdf_en_cfg), sizeof(pri_mdf_en_cfg));
+
+ for (i = 0; i < ether_ports; i++) {
+ /* set default bp_mode: port */
+ /* RX bp: USTORE port bp th, enable send pause frame */
+ /* ustore port_fc_th */
+ ustore_port_fc_th.xoff_th = 0x190;
+ ustore_port_fc_th.xon_th = 0x190;
+ ustore_port_fc_th.fc_set = 0;
+ ustore_port_fc_th.fc_en = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_USTORE_PORT_FC_TH_REG_ARR(i),
+ (u8 *)(&ustore_port_fc_th),
+ sizeof(ustore_port_fc_th));
+
+ for (j = 0; j < 8; j++) {
+ /* RX bp: ustore cos bp th, disable send pfc frame */
+ /* ustore cos_fc_th */
+ ustore_cos_fc_th.xoff_th = 0x64;
+ ustore_cos_fc_th.xon_th = 0x64;
+ ustore_cos_fc_th.fc_set = 0;
+ ustore_cos_fc_th.fc_en = 0;
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_USTORE_COS_FC_TH_REG_ARR(i * 8 + j),
+ (u8 *)(&ustore_cos_fc_th),
+ sizeof(ustore_cos_fc_th));
+
+ /* downstream: sch_cos->pkt_cos or sch_cos->dscp */
+ /* epro sch_cos_map */
+ cos_map.pkt_cos = j;
+ cos_map.dscp = j << 3;
+ nbl_hw_wr_regs(hw_mgt, NBL_EPRO_SCH_COS_MAP_TABLE(i, j),
+ (u8 *)(&cos_map), sizeof(cos_map));
+ }
+ }
+
+ /* upstream: pkt dscp/802.1p -> sch_cos */
+ for (i = 0; i < ether_ports; i++) {
+ /* upstream: when pfc_mode is 802.1p,
+ * vlan pri -> sch_cos map table
+ */
+ /* upa pri_conf_table */
+ conf_table.pri0 = 0;
+ conf_table.pri1 = 1;
+ conf_table.pri2 = 2;
+ conf_table.pri3 = 3;
+ conf_table.pri4 = 4;
+ conf_table.pri5 = 5;
+ conf_table.pri6 = 6;
+ conf_table.pri7 = 7;
+ nbl_hw_wr_regs(hw_mgt, NBL_UPA_PRI_CONF_TABLE(i * 8),
+ (u8 *)(&conf_table), sizeof(conf_table));
+
+ /* upstream: set default pfc_mode is 802.1p, use outer vlan */
+ /* upa pri_sel_conf */
+ sel_conf.pri_sel = (1 << 4 | 1 << 3);
+ nbl_hw_wr_regs(hw_mgt, NBL_UPA_PRI_SEL_CONF_TABLE(i),
+ (u8 *)(&sel_conf), sizeof(sel_conf));
+ }
+}
+
static void nbl_hw_enable_mailbox_irq(void *priv, u16 func_id, bool enable_msix,
u16 global_vec_id)
{
@@ -291,6 +1215,37 @@ static u32 nbl_hw_get_fw_eth_map(void *priv)
}
static struct nbl_hw_ops hw_ops = {
+ .init_qid_map_table = nbl_hw_init_qid_map_table,
+ .set_qid_map_table = nbl_hw_set_qid_map_table,
+ .set_qid_map_ready = nbl_hw_set_qid_map_ready,
+ .cfg_ipro_queue_tbl = nbl_hw_cfg_ipro_queue_tbl,
+ .cfg_ipro_dn_sport_tbl = nbl_hw_cfg_ipro_dn_sport_tbl,
+ .set_vnet_queue_info = nbl_hw_set_vnet_queue_info,
+ .clear_vnet_queue_info = nbl_hw_clear_vnet_queue_info,
+ .reset_dvn_cfg = nbl_hw_reset_dvn_cfg,
+ .reset_uvn_cfg = nbl_hw_reset_uvn_cfg,
+ .restore_dvn_context = nbl_hw_restore_dvn_context,
+ .restore_uvn_context = nbl_hw_restore_uvn_context,
+
+ .cfg_tx_queue = nbl_hw_cfg_tx_queue,
+ .cfg_rx_queue = nbl_hw_cfg_rx_queue,
+ .cfg_q2tc_netid = nbl_hw_cfg_q2tc_netid,
+ .active_shaping = nbl_hw_active_shaping,
+ .deactive_shaping = nbl_hw_deactive_shaping,
+ .cfg_dsch_net_to_group = nbl_hw_cfg_dsch_net_to_group,
+ .init_epro_rss_key = nbl_hw_init_epro_rss_key,
+ .init_epro_vpt_tbl = nbl_hw_init_epro_vpt_tbl,
+ .cfg_epro_rss_ret = nbl_hw_cfg_epro_rss_ret,
+ .set_epro_rss_pt = nbl_hw_set_epro_rss_pt,
+ .clear_epro_rss_pt = nbl_hw_clear_epro_rss_pt,
+ .disable_dvn = nbl_hw_disable_dvn,
+ .disable_uvn = nbl_hw_disable_uvn,
+ .lso_dsch_drain = nbl_hw_lso_dsch_drain,
+ .rsc_cache_drain = nbl_hw_rsc_cache_drain,
+ .save_dvn_ctx = nbl_hw_save_dvn_ctx,
+ .save_uvn_ctx = nbl_hw_save_uvn_ctx,
+ .setup_queue_switch = nbl_hw_setup_queue_switch,
+ .init_pfc = nbl_hw_init_pfc,
.configure_msix_map = nbl_hw_configure_msix_map,
.configure_msix_info = nbl_hw_configure_msix_info,
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c
new file mode 100644
index 000000000000..a140ffafa20c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c
@@ -0,0 +1,1129 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include <linux/if_bridge.h>
+#include "nbl_queue_leonis.h"
+#include "nbl_resource_leonis.h"
+
+static struct nbl_queue_vsi_info *
+nbl_res_queue_get_vsi_info(struct nbl_resource_mgt *res_mgt, u16 vsi_id)
+{
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info;
+ u16 func_id;
+ int i;
+
+ func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ queue_info = &queue_mgt->queue_info[func_id];
+
+ for (i = 0; i < NBL_VSI_MAX; i++)
+ if (queue_info->vsi_info[i].vsi_id == vsi_id)
+ return &queue_info->vsi_info[i];
+
+ return NULL;
+}
+
+static int nbl_res_queue_get_net_id(u16 func_id, u16 vsi_type)
+{
+ int net_id;
+
+ switch (vsi_type) {
+ case NBL_VSI_DATA:
+ net_id = func_id + NBL_SPECIFIC_VSI_NET_ID_OFFSET;
+ break;
+ default:
+ net_id = func_id;
+ break;
+ }
+
+ return net_id;
+}
+
+static int nbl_res_queue_setup_queue_info(struct nbl_resource_mgt *res_mgt,
+ u16 func_id, u16 num_queues)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ u16 *txrx_queues, *queues_context;
+ u32 *uvn_stat_pkt_drop;
+ u16 queue_index;
+ int i, ret = 0;
+
+ nbl_debug(common, "Setup qid map, func_id:%d, num_queues:%d", func_id,
+ num_queues);
+
+ txrx_queues = kcalloc(num_queues, sizeof(txrx_queues[0]), GFP_ATOMIC);
+ if (!txrx_queues) {
+ ret = -ENOMEM;
+ goto alloc_txrx_queues_fail;
+ }
+
+	queues_context =
+		kcalloc(num_queues * 2, sizeof(queues_context[0]), GFP_ATOMIC);
+ if (!queues_context) {
+ ret = -ENOMEM;
+ goto alloc_queue_contex_fail;
+ }
+
+ uvn_stat_pkt_drop =
+ kcalloc(num_queues, sizeof(*uvn_stat_pkt_drop), GFP_ATOMIC);
+ if (!uvn_stat_pkt_drop) {
+ ret = -ENOMEM;
+ goto alloc_uvn_stat_pkt_drop_fail;
+ }
+
+ queue_info->num_txrx_queues = num_queues;
+ queue_info->txrx_queues = txrx_queues;
+ queue_info->queues_context = queues_context;
+ queue_info->uvn_stat_pkt_drop = uvn_stat_pkt_drop;
+
+ for (i = 0; i < num_queues; i++) {
+ queue_index = find_first_zero_bit(queue_mgt->txrx_queue_bitmap,
+ NBL_MAX_TXRX_QUEUE);
+ if (queue_index == NBL_MAX_TXRX_QUEUE) {
+ ret = -ENOSPC;
+ goto get_txrx_queue_fail;
+ }
+ txrx_queues[i] = queue_index;
+ set_bit(queue_index, queue_mgt->txrx_queue_bitmap);
+ }
+ return 0;
+
+get_txrx_queue_fail:
+	kfree(uvn_stat_pkt_drop);
+	while (i--)
+		clear_bit(txrx_queues[i], queue_mgt->txrx_queue_bitmap);
+	queue_info->num_txrx_queues = 0;
+	queue_info->txrx_queues = NULL;
+	queue_info->queues_context = NULL;
+	queue_info->uvn_stat_pkt_drop = NULL;
+alloc_uvn_stat_pkt_drop_fail:
+ kfree(queues_context);
+alloc_queue_contex_fail:
+ kfree(txrx_queues);
+alloc_txrx_queues_fail:
+ return ret;
+}
+
+static void nbl_res_queue_remove_queue_info(struct nbl_resource_mgt *res_mgt,
+ u16 func_id)
+{
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ u16 i;
+
+ for (i = 0; i < queue_info->num_txrx_queues; i++)
+ clear_bit(queue_info->txrx_queues[i],
+ queue_mgt->txrx_queue_bitmap);
+
+ kfree(queue_info->txrx_queues);
+ kfree(queue_info->queues_context);
+ kfree(queue_info->uvn_stat_pkt_drop);
+ queue_info->txrx_queues = NULL;
+ queue_info->queues_context = NULL;
+ queue_info->uvn_stat_pkt_drop = NULL;
+
+ queue_info->num_txrx_queues = 0;
+}
+
+static u64 nbl_res_queue_qid_map_key(struct nbl_qid_map_table *map)
+{
+ return ((u64)map->notify_addr_h
+ << NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN) |
+ (u64)map->notify_addr_l;
+}
+
+static void nbl_res_queue_set_qid_map_table(struct nbl_resource_mgt *res_mgt,
+ u16 tail)
+{
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_qid_map_param param;
+ int i;
+
+ param.qid_map = kcalloc(tail, sizeof(param.qid_map[0]), GFP_ATOMIC);
+ if (!param.qid_map)
+ return;
+
+ for (i = 0; i < tail; i++)
+ param.qid_map[i] = queue_mgt->qid_map_table[i];
+
+ param.start = 0;
+ param.len = tail;
+
+	hw_ops->set_qid_map_table(NBL_RES_MGT_TO_HW_PRIV(res_mgt), &param,
+				  queue_mgt->qid_map_select);
+ queue_mgt->qid_map_select = !queue_mgt->qid_map_select;
+
+ if (!queue_mgt->qid_map_ready) {
+ hw_ops->set_qid_map_ready(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ true);
+ queue_mgt->qid_map_ready = true;
+ }
+
+ kfree(param.qid_map);
+}
+
+int nbl_res_queue_setup_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt,
+ u16 func_id, u64 notify_addr)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ struct nbl_qid_map_table qid_map;
+ u16 *txrx_queues = queue_info->txrx_queues;
+ u16 qid_map_entries = queue_info->num_txrx_queues, qid_map_base, tail;
+ u64 key, tmp;
+ int i;
+
+ /* Get base location */
+ queue_info->notify_addr = notify_addr;
+ key = notify_addr >> NBL_QID_MAP_NOTIFY_ADDR_SHIFT;
+
+ for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) {
+ tmp = nbl_res_queue_qid_map_key(&queue_mgt->qid_map_table[i]);
+ WARN_ON(key == tmp);
+ if (key < tmp) {
+ qid_map_base = i;
+ break;
+ }
+ }
+ if (i == NBL_QID_MAP_TABLE_ENTRIES) {
+ nbl_err(common, "No valid qid map key for func %d", func_id);
+ return -ENOSPC;
+ }
+
+	/* Calc tail: we will write the qid_map from 0 to tail.
+	 * This range (0, tail) has to cover all changes in both shadow
+	 * tables. Therefore each table's tail is stored, and we always use
+	 * the larger of this table's stale tail and the new entry count.
+ *
+ * The reason can be illustrated in the following example:
+ * Step 1: del some entries, which happens on table 1, and each table
+ * could be
+ * Table 0: 0 - 31 used
+ * Table 1: 0 - 15 used
+ * SW : queue_mgt->total_qid_map_entries = 16
+ * Step 2: add 2 entries, which happens on table 0, if we use 16 + 2
+ * as the tail, then
+ * Table 0: 0 - 17 correctly added, 18 - 31 garbage data
+ * Table 1: 0 - 15 used
+ * SW : queue_mgt->total_qid_map_entries = 18
+ * And this is definitely wrong, it should use 32, table 0's original
+ * tail
+ */
+ queue_mgt->total_qid_map_entries += qid_map_entries;
+ tail = max(queue_mgt->total_qid_map_entries,
+ queue_mgt->qid_map_tail[queue_mgt->qid_map_select]);
+ queue_mgt->qid_map_tail[queue_mgt->qid_map_select] =
+ queue_mgt->total_qid_map_entries;
+
+ /* Update qid map */
+ for (i = NBL_QID_MAP_TABLE_ENTRIES - qid_map_entries; i > qid_map_base;
+ i--)
+ queue_mgt->qid_map_table[i - 1 + qid_map_entries] =
+ queue_mgt->qid_map_table[i - 1];
+
+ for (i = 0; i < queue_info->num_txrx_queues; i++) {
+ qid_map.local_qid = 2 * i + 1;
+ qid_map.notify_addr_l = key;
+ qid_map.notify_addr_h = key >>
+ NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN;
+ qid_map.global_qid = txrx_queues[i];
+ qid_map.ctrlq_flag = 0;
+ queue_mgt->qid_map_table[qid_map_base + i] = qid_map;
+ }
+
+ nbl_res_queue_set_qid_map_table(res_mgt, tail);
+
+ return 0;
+}
+
+void nbl_res_queue_remove_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt,
+ u16 func_id)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ struct nbl_qid_map_table qid_map;
+ u64 key;
+ u16 qid_map_entries = queue_info->num_txrx_queues, qid_map_base, tail;
+ int i;
+
+ /* Get base location */
+ key = queue_info->notify_addr >> NBL_QID_MAP_NOTIFY_ADDR_SHIFT;
+
+ for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) {
+ if (key ==
+ nbl_res_queue_qid_map_key(&queue_mgt->qid_map_table[i])) {
+ qid_map_base = i;
+ break;
+ }
+ }
+ if (i == NBL_QID_MAP_TABLE_ENTRIES) {
+ nbl_err(common, "No valid qid map key for func %d", func_id);
+ return;
+ }
+
+	/* Calc tail: we will write the qid_map from 0 to tail.
+	 * This range (0, tail) has to cover all changes in both shadow
+	 * tables. Therefore each table's tail is stored, and we always use
+	 * the larger of this table's stale tail and the driver-stored tail.
+ *
+ * The reason can be illustrated in the following example:
+ * Step 1: del some entries, which happens on table 1, and each table
+ * could be
+ * Table 0: 0 - 31 used
+ * Table 1: 0 - 15 used
+ * SW : queue_mgt->total_qid_map_entries = 16
+ * Step 2: del 2 entries, which happens on table 0, if we use 16 as
+ * the tail, then
+ * Table 0: 0 - 13 correct, 14 - 31 garbage data
+ * Table 1: 0 - 15 used
+ * SW : queue_mgt->total_qid_map_entries = 14
+ * And this is definitely wrong, it should use 32, table 0's original
+ * tail
+ */
+ tail = max(queue_mgt->total_qid_map_entries,
+ queue_mgt->qid_map_tail[queue_mgt->qid_map_select]);
+ queue_mgt->total_qid_map_entries -= qid_map_entries;
+ queue_mgt->qid_map_tail[queue_mgt->qid_map_select] =
+ queue_mgt->total_qid_map_entries;
+
+ /* Update qid map */
+ memset(&qid_map, U8_MAX, sizeof(qid_map));
+
+ for (i = qid_map_base; i < NBL_QID_MAP_TABLE_ENTRIES - qid_map_entries;
+ i++)
+ queue_mgt->qid_map_table[i] =
+ queue_mgt->qid_map_table[i + qid_map_entries];
+ for (; i < NBL_QID_MAP_TABLE_ENTRIES; i++)
+ queue_mgt->qid_map_table[i] = qid_map;
+
+ nbl_res_queue_set_qid_map_table(res_mgt, tail);
+}
+
+static int nbl_res_queue_get_rss_ret_base(struct nbl_resource_mgt *res_mgt,
+ u16 count, u16 rss_entry_size,
+ struct nbl_queue_vsi_info *vsi_info)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ u32 rss_ret_base_start;
+ u32 rss_ret_base_end;
+ u16 func_id;
+ u16 rss_entry_count;
+ u16 index, i, j, k;
+ int success = 1;
+ int ret = -EFAULT;
+
+ func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_info->vsi_id);
+ if (func_id < NBL_MAX_ETHERNET &&
+ vsi_info->vsi_index == NBL_VSI_DATA) {
+ rss_ret_base_start = 0;
+ rss_ret_base_end = NBL_EPRO_PF_RSS_RET_TBL_DEPTH;
+ vsi_info->rss_entry_size = NBL_EPRO_PF_RSS_ENTRY_SIZE;
+ rss_entry_count = NBL_EPRO_PF_RSS_RET_TBL_COUNT;
+ } else {
+ rss_ret_base_start = NBL_EPRO_PF_RSS_RET_TBL_DEPTH;
+ rss_ret_base_end = NBL_EPRO_RSS_RET_TBL_DEPTH;
+ vsi_info->rss_entry_size = rss_entry_size;
+ rss_entry_count = count;
+ }
+
+ for (i = rss_ret_base_start; i < rss_ret_base_end;) {
+ index = find_next_zero_bit(queue_mgt->rss_ret_bitmap,
+ rss_ret_base_end, i);
+ if (index == rss_ret_base_end) {
+ nbl_err(common, "There is no available rss ret left");
+ break;
+ }
+
+ success = 1;
+ for (j = index + 1; j < (index + rss_entry_count); j++) {
+ if (j >= rss_ret_base_end) {
+ success = 0;
+ break;
+ }
+
+ if (test_bit(j, queue_mgt->rss_ret_bitmap)) {
+ success = 0;
+ break;
+ }
+ }
+ if (success) {
+ for (k = index; k < (index + rss_entry_count); k++)
+ set_bit(k, queue_mgt->rss_ret_bitmap);
+ vsi_info->rss_ret_base = index;
+ ret = 0;
+ break;
+ }
+ i = j;
+ }
+
+ return ret;
+}
+
+static int nbl_res_queue_setup_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_queue_info *queue_info = NULL;
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ u16 func_id;
+ u16 qid;
+ int ret = 0, i;
+
+ func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ queue_info = &queue_mgt->queue_info[func_id];
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return -ENOENT;
+
+ /* config ipro queue tbl */
+ for (i = vsi_info->queue_offset;
+ i < vsi_info->queue_offset + vsi_info->queue_num &&
+ i < queue_info->num_txrx_queues;
+ i++) {
+ qid = queue_info->txrx_queues[i];
+ ret = hw_ops->cfg_ipro_queue_tbl(p, qid, vsi_id, 1);
+		if (ret) {
+			while (i-- > vsi_info->queue_offset)
+				hw_ops->cfg_ipro_queue_tbl(p,
+							   queue_info->txrx_queues[i],
+							   0, 0);
+			return ret;
+		}
+ }
+
+ return 0;
+}
+
+static void nbl_res_queue_remove_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_queue_info *queue_info = NULL;
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ u16 func_id;
+ int i;
+
+ func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ queue_info = &queue_mgt->queue_info[func_id];
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return;
+
+	/* clear ipro queue tbl */
+ for (i = vsi_info->queue_offset;
+ i < vsi_info->queue_offset + vsi_info->queue_num &&
+ i < queue_info->num_txrx_queues;
+ i++)
+ hw_ops->cfg_ipro_queue_tbl(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ queue_info->txrx_queues[i], 0, 0);
+}
+
+static int nbl_res_queue_setup_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ u16 rss_entry_size, count;
+ int ret = 0;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return -ENOENT;
+
+	rss_entry_size = DIV_ROUND_UP(vsi_info->queue_num,
+				      NBL_EPRO_RSS_ENTRY_SIZE_UNIT);
+
+ rss_entry_size = ilog2(roundup_pow_of_two(rss_entry_size));
+ count = NBL_EPRO_RSS_ENTRY_SIZE_UNIT << rss_entry_size;
+
+ ret = nbl_res_queue_get_rss_ret_base(res_mgt, count, rss_entry_size,
+ vsi_info);
+ if (ret)
+ return -ENOSPC;
+
+ vsi_info->rss_vld = true;
+
+ return 0;
+}
+
+static void nbl_res_queue_remove_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ u16 rss_ret_base, rss_entry_size, count;
+ int i;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return;
+
+ if (!vsi_info->rss_vld)
+ return;
+
+ rss_ret_base = vsi_info->rss_ret_base;
+ rss_entry_size = vsi_info->rss_entry_size;
+ count = NBL_EPRO_RSS_ENTRY_SIZE_UNIT << rss_entry_size;
+
+ for (i = rss_ret_base; i < (rss_ret_base + count); i++)
+ clear_bit(i, queue_mgt->rss_ret_bitmap);
+
+ vsi_info->rss_vld = false;
+}
+
+static void
+nbl_res_queue_setup_queue_cfg(struct nbl_queue_mgt *queue_mgt,
+ struct nbl_queue_cfg_param *cfg_param,
+ struct nbl_txrx_queue_param *queue_param,
+ bool is_tx, u16 func_id)
+{
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+
+ cfg_param->desc = queue_param->dma;
+ cfg_param->size = queue_param->desc_num;
+ cfg_param->global_vector = queue_param->global_vec_id;
+ cfg_param->global_queue_id =
+ queue_info->txrx_queues[queue_param->local_queue_id];
+
+ cfg_param->avail = queue_param->avail;
+ cfg_param->used = queue_param->used;
+ cfg_param->extend_header = queue_param->extend_header;
+ cfg_param->split = queue_param->split;
+ cfg_param->last_avail_idx = queue_param->cxt;
+
+ cfg_param->intr_en = queue_param->intr_en;
+ cfg_param->intr_mask = queue_param->intr_mask;
+
+ cfg_param->tx = is_tx;
+ cfg_param->rxcsum = queue_param->rxcsum;
+ cfg_param->half_offload_en = queue_param->half_offload_en;
+}
+
+static void nbl_res_queue_update_netid_refnum(struct nbl_queue_mgt *queue_mgt,
+ u16 net_id, bool add)
+{
+ if (net_id >= NBL_MAX_NET_ID)
+ return;
+
+ if (add) {
+ queue_mgt->net_id_ref_vsinum[net_id]++;
+ } else {
+		/* probe calls clear_queue first, so check for non-zero to
+		 * support disabling dsch more than once
+		 */
+ if (queue_mgt->net_id_ref_vsinum[net_id])
+ queue_mgt->net_id_ref_vsinum[net_id]--;
+ }
+}
+
+static u16 nbl_res_queue_get_netid_refnum(struct nbl_queue_mgt *queue_mgt,
+ u16 net_id)
+{
+ if (net_id >= NBL_MAX_NET_ID)
+ return 0;
+
+ return queue_mgt->net_id_ref_vsinum[net_id];
+}
+
+static void nbl_res_queue_setup_hw_dq(struct nbl_resource_mgt *res_mgt,
+ struct nbl_queue_cfg_param *queue_cfg,
+ u16 func_id, u16 vsi_id)
+{
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ struct nbl_queue_vsi_info *vsi_info;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_vnet_queue_info_param param = {0};
+ void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ u16 global_qid = queue_cfg->global_queue_id;
+ u8 bus, dev, func;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return;
+
+ nbl_res_func_id_to_bdf(res_mgt, func_id, &bus, &dev, &func);
+ queue_info->split = queue_cfg->split;
+ queue_info->queue_size = queue_cfg->size;
+
+ param.function_id = func;
+ param.device_id = dev;
+ param.bus_id = bus;
+ param.valid = 1;
+
+ if (queue_cfg->intr_en) {
+ param.msix_idx = queue_cfg->global_vector;
+ param.msix_idx_valid = 1;
+ }
+
+ if (queue_cfg->tx) {
+		hw_ops->set_vnet_queue_info(p, &param,
+					    NBL_PAIR_ID_GET_TX(global_qid));
+ hw_ops->reset_dvn_cfg(p, global_qid);
+ if (!queue_cfg->extend_header)
+ hw_ops->restore_dvn_context(p, global_qid,
+ queue_cfg->split,
+ queue_cfg->last_avail_idx);
+ hw_ops->cfg_tx_queue(p, queue_cfg, global_qid);
+ if (nbl_res_queue_get_netid_refnum(queue_mgt, vsi_info->net_id))
+ hw_ops->cfg_q2tc_netid(p, global_qid,
+ vsi_info->net_id, 1);
+
+ } else {
+		hw_ops->set_vnet_queue_info(p, &param,
+					    NBL_PAIR_ID_GET_RX(global_qid));
+ hw_ops->reset_uvn_cfg(p, global_qid);
+
+ if (!queue_cfg->extend_header)
+ hw_ops->restore_uvn_context(p, global_qid,
+ queue_cfg->split,
+ queue_cfg->last_avail_idx);
+ hw_ops->cfg_rx_queue(p, queue_cfg, global_qid);
+ }
+}
+
+static void nbl_res_queue_remove_all_hw_dq(struct nbl_resource_mgt *res_mgt,
+ u16 func_id,
+ struct nbl_queue_vsi_info *vsi_info)
+{
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ u16 start = vsi_info->queue_offset,
+ end = vsi_info->queue_offset + vsi_info->queue_num;
+ u16 global_queue;
+ int i;
+
+ for (i = start; i < end; i++) {
+ global_queue = queue_info->txrx_queues[i];
+
+ hw_ops->lso_dsch_drain(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue);
+ hw_ops->disable_dvn(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue);
+ }
+
+ for (i = start; i < end; i++) {
+ global_queue = queue_info->txrx_queues[i];
+
+ hw_ops->disable_uvn(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue);
+ hw_ops->rsc_cache_drain(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue);
+ }
+
+ for (i = start; i < end; i++) {
+ global_queue = queue_info->txrx_queues[i];
+ queue_info->queues_context[NBL_PAIR_ID_GET_RX(i)] =
+ hw_ops->save_uvn_ctx(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue, queue_info->split,
+ queue_info->queue_size);
+ queue_info->queues_context[NBL_PAIR_ID_GET_TX(i)] =
+ hw_ops->save_dvn_ctx(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue, queue_info->split);
+ }
+
+ for (i = start; i < end; i++) {
+ global_queue = queue_info->txrx_queues[i];
+ hw_ops->reset_uvn_cfg(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue);
+ hw_ops->reset_dvn_cfg(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ global_queue);
+ }
+
+ for (i = start; i < end; i++) {
+ global_queue = queue_info->txrx_queues[i];
+ hw_ops->clear_vnet_queue_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ NBL_PAIR_ID_GET_RX(global_queue));
+ hw_ops->clear_vnet_queue_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ NBL_PAIR_ID_GET_TX(global_queue));
+ }
+}
+
+int nbl_res_queue_init_qid_map_table(struct nbl_resource_mgt *res_mgt,
+ struct nbl_queue_mgt *queue_mgt,
+ struct nbl_hw_ops *hw_ops)
+{
+ struct nbl_qid_map_table invalid_qid_map;
+ u16 i;
+
+ queue_mgt->qid_map_ready = 0;
+ queue_mgt->qid_map_select = NBL_MASTER_QID_MAP_TABLE;
+
+ memset(&invalid_qid_map, 0, sizeof(invalid_qid_map));
+ invalid_qid_map.local_qid = 0x1FF;
+ invalid_qid_map.notify_addr_l = 0x7FFFFF;
+ invalid_qid_map.notify_addr_h = 0xFFFFFFFF;
+ invalid_qid_map.global_qid = 0xFFF;
+	invalid_qid_map.ctrlq_flag = 0x1;
+
+ for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++)
+ queue_mgt->qid_map_table[i] = invalid_qid_map;
+
+ hw_ops->init_qid_map_table(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+
+ return 0;
+}
+
+static int nbl_res_queue_init_epro_rss_key(struct nbl_resource_mgt *res_mgt,
+ struct nbl_hw_ops *hw_ops)
+{
+	return hw_ops->init_epro_rss_key(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+}
+
+static int nbl_res_queue_init_epro_vpt_table(struct nbl_resource_mgt *res_mgt,
+ u16 func_id)
+{
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ int pfid, vfid;
+ u16 vsi_id;
+
+	vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id,
+					   NBL_VSI_SERV_PF_DATA_TYPE);
+	nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid);
+
+	/* init the PF data vsi */
+	hw_ops->init_epro_vpt_tbl(p, vsi_id);
+
+ return 0;
+}
+
+static int
+nbl_res_queue_init_ipro_dn_sport_tbl(struct nbl_resource_mgt *res_mgt,
+				     u16 func_id, u16 bmode, bool binit)
+{
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ int pfid, vfid;
+ u16 eth_id, vsi_id;
+ int i;
+
+ vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id,
+ NBL_VSI_SERV_PF_DATA_TYPE);
+ nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid);
+
+ eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id);
+
+ for (i = 0; i < NBL_VSI_MAX; i++)
+ hw_ops->cfg_ipro_dn_sport_tbl(p, vsi_id + i, eth_id,
+ bmode, binit);
+ return 0;
+}
+
+static int nbl_res_queue_init_rss(struct nbl_resource_mgt *res_mgt,
+ struct nbl_queue_mgt *queue_mgt,
+ struct nbl_hw_ops *hw_ops)
+{
+ return nbl_res_queue_init_epro_rss_key(res_mgt, hw_ops);
+}
+
+static int nbl_res_queue_alloc_txrx_queues(void *priv, u16 vsi_id,
+ u16 queue_num)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ u64 notify_addr;
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ int ret = 0;
+
+ notify_addr = nbl_res_get_func_bar_base_addr(res_mgt, func_id);
+
+ ret = nbl_res_queue_setup_queue_info(res_mgt, func_id, queue_num);
+ if (ret)
+ goto setup_queue_info_fail;
+
+ ret = nbl_res_queue_setup_qid_map_table_leonis(res_mgt, func_id,
+ notify_addr);
+ if (ret)
+ goto setup_qid_map_fail;
+
+ return 0;
+
+setup_qid_map_fail:
+ nbl_res_queue_remove_queue_info(res_mgt, func_id);
+setup_queue_info_fail:
+ return ret;
+}
+
+static void nbl_res_queue_free_txrx_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+
+ nbl_res_queue_remove_qid_map_table_leonis(res_mgt, func_id);
+ nbl_res_queue_remove_queue_info(res_mgt, func_id);
+}
+
+static int nbl_res_queue_setup_queue(void *priv,
+ struct nbl_txrx_queue_param *param,
+ bool is_tx)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_cfg_param cfg_param = { 0 };
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, param->vsi_id);
+
+ nbl_res_queue_setup_queue_cfg(NBL_RES_MGT_TO_QUEUE_MGT(res_mgt),
+ &cfg_param, param, is_tx, func_id);
+ nbl_res_queue_setup_hw_dq(res_mgt, &cfg_param, func_id, param->vsi_id);
+ return 0;
+}
+
+static void nbl_res_queue_remove_all_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return;
+
+ nbl_res_queue_remove_all_hw_dq(res_mgt, func_id, vsi_info);
+}
+
+static int nbl_res_queue_register_vsi2q(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = NULL;
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ u16 func_id;
+
+ func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ queue_info = &queue_mgt->queue_info[func_id];
+ vsi_info = &queue_info->vsi_info[vsi_index];
+
+ memset(vsi_info, 0, sizeof(*vsi_info));
+ vsi_info->vld = 1;
+ vsi_info->vsi_index = vsi_index;
+ vsi_info->vsi_id = vsi_id;
+ vsi_info->queue_offset = queue_offset;
+ vsi_info->queue_num = queue_num;
+ vsi_info->net_id =
+ nbl_res_queue_get_net_id(func_id, vsi_info->vsi_index);
+
+ return 0;
+}
+
+static int nbl_res_queue_cfg_dsch(void *priv, u16 vsi_id, bool vld)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_queue_vsi_info *vsi_info;
+	/* group_id is the same as eth_id */
+ u16 group_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id);
+ void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ u16 start = 0, end = 0;
+ int i, ret = 0;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return -ENOENT;
+
+ start = vsi_info->queue_offset;
+ end = vsi_info->queue_num + vsi_info->queue_offset;
+
+ /* When setting up, g2p -> n2g -> q2tc; when down, q2tc -> n2g -> g2p */
+ if (!vld) {
+		hw_ops->deactive_shaping(p, vsi_info->net_id);
+ for (i = start; i < end; i++)
+ hw_ops->cfg_q2tc_netid(p,
+ queue_info->txrx_queues[i],
+ vsi_info->net_id, vld);
+ nbl_res_queue_update_netid_refnum(queue_mgt, vsi_info->net_id,
+ false);
+ }
+
+ if (!nbl_res_queue_get_netid_refnum(queue_mgt, vsi_info->net_id)) {
+ ret = hw_ops->cfg_dsch_net_to_group(p, vsi_info->net_id,
+ group_id, vld);
+ if (ret)
+ return ret;
+ }
+
+ if (vld) {
+ for (i = start; i < end; i++)
+			hw_ops->cfg_q2tc_netid(p,
+					       queue_info->txrx_queues[i],
+					       vsi_info->net_id, vld);
+		hw_ops->active_shaping(p, vsi_info->net_id);
+ nbl_res_queue_update_netid_refnum(queue_mgt, vsi_info->net_id,
+ true);
+ }
+
+ return 0;
+}
+
+static int nbl_res_queue_setup_cqs(void *priv, u16 vsi_id, u16 real_qps,
+ bool rss_indir_set)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info;
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+ void *q_list;
+ u16 func_id;
+
+ func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ queue_info = &queue_mgt->queue_info[func_id];
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return -ENOENT;
+
+ if (real_qps == vsi_info->curr_qps)
+ return 0;
+
+ if (real_qps && rss_indir_set) {
+ q_list = queue_info->txrx_queues + vsi_info->queue_offset;
+ hw_ops->cfg_epro_rss_ret(p, vsi_info->rss_ret_base,
+ vsi_info->rss_entry_size, real_qps,
+ q_list, NULL);
+ }
+
+ if (!vsi_info->curr_qps)
+ hw_ops->set_epro_rss_pt(p, vsi_id, vsi_info->rss_ret_base,
+ vsi_info->rss_entry_size);
+
+ vsi_info->curr_qps = real_qps;
+ vsi_info->curr_qps_static = real_qps;
+ return 0;
+}
+
+static void nbl_res_queue_remove_cqs(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return;
+
+ hw_ops->clear_epro_rss_pt(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id);
+
+ vsi_info->curr_qps = 0;
+}
+
+static int nbl_res_queue_init_switch(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+ int i;
+
+ for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET)
+ hw_ops->setup_queue_switch(NBL_RES_MGT_TO_HW_PRIV(res_mgt), i);
+
+ return 0;
+}
+
+static int nbl_res_queue_init(void *priv)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_mgt *queue_mgt;
+ struct nbl_hw_ops *hw_ops;
+ int i, ret = 0;
+
+ if (!res_mgt)
+ return -EINVAL;
+
+ queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+ ret = nbl_res_queue_init_qid_map_table(res_mgt, queue_mgt, hw_ops);
+ if (ret)
+ goto init_queue_fail;
+
+ ret = nbl_res_queue_init_rss(res_mgt, queue_mgt, hw_ops);
+ if (ret)
+ goto init_queue_fail;
+
+ ret = nbl_res_queue_init_switch(res_mgt);
+ if (ret)
+ goto init_queue_fail;
+
+ for (i = 0; i < NBL_RES_MGT_TO_PF_NUM(res_mgt); i++) {
+ nbl_res_queue_init_epro_vpt_table(res_mgt, i);
+ nbl_res_queue_init_ipro_dn_sport_tbl(res_mgt, i,
+ BRIDGE_MODE_VEB, true);
+ }
+ hw_ops->init_pfc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), NBL_MAX_ETHERNET);
+
+ return 0;
+
+init_queue_fail:
+ return ret;
+}
+
+static u16 nbl_res_queue_get_vsi_global_qid(void *priv, u16 vsi_id,
+ u16 local_qid)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+
+ if (!queue_info->num_txrx_queues)
+ return 0xffff;
+
+ return queue_info->txrx_queues[local_qid];
+}
+
+static void nbl_res_queue_get_rxfh_indir_size(void *priv, u16 vsi_id,
+ u32 *rxfh_indir_size)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return;
+
+ *rxfh_indir_size = NBL_EPRO_RSS_ENTRY_SIZE_UNIT
+ << vsi_info->rss_entry_size;
+}
+
+static int nbl_res_queue_set_rxfh_indir(void *priv, u16 vsi_id,
+ const u32 *indir, u32 indir_size)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_queue_vsi_info *vsi_info = NULL;
+ u32 *rss_ret;
+ u16 local_id;
+	u32 i;
+
+ vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+ if (!vsi_info)
+ return -ENOENT;
+
+ if (indir) {
+ rss_ret = kcalloc(indir_size, sizeof(indir[0]), GFP_KERNEL);
+ if (!rss_ret)
+ return -ENOMEM;
+ /* local queue to global queue */
+ for (i = 0; i < indir_size; i++) {
+ local_id = vsi_info->queue_offset + indir[i];
+ rss_ret[i] =
+ nbl_res_queue_get_vsi_global_qid(res_mgt,
+ vsi_id,
+ local_id);
+ }
+ hw_ops->cfg_epro_rss_ret(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+ vsi_info->rss_ret_base,
+ vsi_info->rss_entry_size, 0, NULL,
+ rss_ret);
+ kfree(rss_ret);
+ }
+
+ if (!vsi_info->curr_qps)
+ hw_ops->set_epro_rss_pt(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id,
+ vsi_info->rss_ret_base,
+ vsi_info->rss_entry_size);
+
+ return 0;
+}
+
+static void nbl_res_queue_clear_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+ struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+ struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+
+ nbl_res_queue_remove_rss(priv, vsi_id);
+ nbl_res_queue_remove_q2vsi(priv, vsi_id);
+ if (!queue_info->num_txrx_queues)
+ return;
+ nbl_res_queue_remove_cqs(res_mgt, vsi_id);
+ nbl_res_queue_cfg_dsch(res_mgt, vsi_id, false);
+ nbl_res_queue_remove_all_queues(res_mgt, vsi_id);
+ nbl_res_queue_free_txrx_queues(res_mgt, vsi_id);
+}
+
+/* NBL_QUEUE_SET_OPS(ops_name, func)
+ *
+ * Use X Macros to reduce setup and remove codes.
+ */
+#define NBL_QUEUE_OPS_TBL \
+do { \
+ NBL_QUEUE_SET_OPS(alloc_txrx_queues, \
+ nbl_res_queue_alloc_txrx_queues); \
+ NBL_QUEUE_SET_OPS(free_txrx_queues, \
+ nbl_res_queue_free_txrx_queues); \
+ NBL_QUEUE_SET_OPS(register_vsi2q, nbl_res_queue_register_vsi2q);\
+ NBL_QUEUE_SET_OPS(setup_q2vsi, nbl_res_queue_setup_q2vsi); \
+ NBL_QUEUE_SET_OPS(remove_q2vsi, nbl_res_queue_remove_q2vsi); \
+ NBL_QUEUE_SET_OPS(setup_rss, nbl_res_queue_setup_rss); \
+ NBL_QUEUE_SET_OPS(remove_rss, nbl_res_queue_remove_rss); \
+ NBL_QUEUE_SET_OPS(setup_queue, nbl_res_queue_setup_queue); \
+ NBL_QUEUE_SET_OPS(remove_all_queues, nbl_res_queue_remove_all_queues);\
+ NBL_QUEUE_SET_OPS(cfg_dsch, nbl_res_queue_cfg_dsch); \
+ NBL_QUEUE_SET_OPS(setup_cqs, nbl_res_queue_setup_cqs); \
+ NBL_QUEUE_SET_OPS(remove_cqs, nbl_res_queue_remove_cqs); \
+ NBL_QUEUE_SET_OPS(queue_init, nbl_res_queue_init); \
+ NBL_QUEUE_SET_OPS(get_rxfh_indir_size, \
+ nbl_res_queue_get_rxfh_indir_size); \
+ NBL_QUEUE_SET_OPS(set_rxfh_indir, nbl_res_queue_set_rxfh_indir);\
+ NBL_QUEUE_SET_OPS(clear_queues, nbl_res_queue_clear_queues); \
+} while (0)
+
+int nbl_queue_setup_ops_leonis(struct nbl_resource_ops *res_ops)
+{
+#define NBL_QUEUE_SET_OPS(name, func) \
+	do { \
+		res_ops->NBL_NAME(name) = func; \
+	} while (0)
+ NBL_QUEUE_OPS_TBL;
+#undef NBL_QUEUE_SET_OPS
+
+ return 0;
+}
+
+void nbl_queue_remove_ops_leonis(struct nbl_resource_ops *res_ops)
+{
+#define NBL_QUEUE_SET_OPS(name, func) \
+do { \
+ (void)(func); \
+	res_ops->NBL_NAME(name) = NULL; \
+} while (0)
+ NBL_QUEUE_OPS_TBL;
+#undef NBL_QUEUE_SET_OPS
+}
+
+void nbl_queue_mgt_init_leonis(struct nbl_queue_mgt *queue_mgt)
+{
+ queue_mgt->qid_map_select = NBL_MASTER_QID_MAP_TABLE;
+}
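For reviewers unfamiliar with the X-macro scheme used by NBL_QUEUE_OPS_TBL
above, here is a minimal standalone sketch of the same idea — one table,
expanded twice, to both install and clear function pointers. The struct and
names here are illustrative only, not the driver's.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative ops struct; the real struct nbl_resource_ops lives in
 * nbl_def_resource.h and has many more members.
 */
struct demo_ops {
	int (*setup)(void);
	int (*teardown)(void);
};

static int demo_setup(void)    { return 1; }
static int demo_teardown(void) { return 2; }

/* One table; the meaning of DEMO_SET_OPS changes per expansion site. */
#define DEMO_OPS_TBL                              \
do {                                              \
	DEMO_SET_OPS(setup, demo_setup);          \
	DEMO_SET_OPS(teardown, demo_teardown);    \
} while (0)

static void demo_setup_ops(struct demo_ops *ops)
{
#define DEMO_SET_OPS(name, func) (ops->name = (func))
	DEMO_OPS_TBL;
#undef DEMO_SET_OPS
}

static void demo_remove_ops(struct demo_ops *ops)
{
#define DEMO_SET_OPS(name, func) (ops->name = NULL)
	DEMO_OPS_TBL;
#undef DEMO_SET_OPS
}
```

Adding an op then means touching only the table, exactly as the driver's
comment promises.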
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
new file mode 100644
index 000000000000..396df65d7088
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_QUEUE_LEONIS_H_
+#define _NBL_QUEUE_LEONIS_H_
+
+#include "nbl_resource.h"
+
+#define NBL_QID_MAP_NOTIFY_ADDR_SHIFT (9)
+#define NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN (23)
+
+#endif
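The two constants above suggest the notify address is carried as a 23-bit
low part plus a high part after a 9-bit shift (matching the 0x7FFFFF mask in
the invalid qid-map entry). A hedged standalone sketch of that split — the
names and the exact packing are assumptions, not taken from the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed split, mirroring NBL_QID_MAP_NOTIFY_ADDR_SHIFT and
 * NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN.
 */
#define DEMO_NOTIFY_ADDR_SHIFT    9
#define DEMO_NOTIFY_ADDR_LOW_LEN  23

static void demo_split_notify_addr(uint64_t addr, uint32_t *lo, uint32_t *hi)
{
	uint64_t v = addr >> DEMO_NOTIFY_ADDR_SHIFT;

	/* low 23 bits go in one field, the remainder in the other */
	*lo = (uint32_t)(v & ((1ULL << DEMO_NOTIFY_ADDR_LOW_LEN) - 1));
	*hi = (uint32_t)(v >> DEMO_NOTIFY_ADDR_LOW_LEN);
}
```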
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
index 16e5941aaf76..032131d1b9fa 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
@@ -171,6 +171,10 @@ static struct nbl_resource_ops res_ops = {
};
static struct nbl_res_product_ops product_ops = {
+ .queue_mgt_init = nbl_queue_mgt_init_leonis,
+ .setup_qid_map_table = nbl_res_queue_setup_qid_map_table_leonis,
+ .remove_qid_map_table = nbl_res_queue_remove_qid_map_table_leonis,
+ .init_qid_map_table = nbl_res_queue_init_qid_map_table,
};
static bool is_ops_inited;
@@ -229,6 +233,9 @@ static int nbl_res_setup_ops(struct device *dev,
return -ENOMEM;
if (!is_ops_inited) {
+ ret = nbl_queue_setup_ops_leonis(&res_ops);
+ if (ret)
+ goto setup_fail;
ret = nbl_intr_setup_ops(&res_ops);
if (ret)
goto setup_fail;
@@ -430,7 +437,9 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
{
struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
+ nbl_queue_mgt_stop(res_mgt);
nbl_intr_mgt_stop(res_mgt);
+
nbl_res_ctrl_dev_remove_vsi_info(res_mgt);
nbl_res_ctrl_dev_remove_eth_info(res_mgt);
nbl_res_ctrl_dev_sriov_info_remove(res_mgt);
@@ -470,6 +479,10 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis,
if (ret)
goto start_fail;
+ ret = nbl_queue_mgt_start(res_mgt);
+ if (ret)
+ goto start_fail;
+
ret = nbl_intr_mgt_start(res_mgt);
if (ret)
goto start_fail;
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
index a0a25a2b71ee..3763c33db00f 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
@@ -10,4 +10,16 @@
#include "nbl_resource.h"
#define NBL_MAX_PF_LEONIS 8
+
+int nbl_queue_setup_ops_leonis(struct nbl_resource_ops *resource_ops);
+void nbl_queue_remove_ops_leonis(struct nbl_resource_ops *resource_ops);
+
+void nbl_queue_mgt_init_leonis(struct nbl_queue_mgt *queue_mgt);
+int nbl_res_queue_setup_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt,
+ u16 func_id, u64 notify_addr);
+void nbl_res_queue_remove_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt,
+ u16 func_id);
+int nbl_res_queue_init_qid_map_table(struct nbl_resource_mgt *res_mgt,
+ struct nbl_queue_mgt *queue_mgt,
+ struct nbl_hw_ops *hw_ops);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c
new file mode 100644
index 000000000000..35c2e34b30b6
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include "nbl_queue.h"
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_queue_setup_mgt(struct device *dev,
+ struct nbl_queue_mgt **queue_mgt)
+{
+	*queue_mgt = devm_kzalloc(dev, sizeof(**queue_mgt), GFP_KERNEL);
+ if (!*queue_mgt)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nbl_queue_remove_mgt(struct device *dev,
+ struct nbl_queue_mgt **queue_mgt)
+{
+ devm_kfree(dev, *queue_mgt);
+ *queue_mgt = NULL;
+}
+
+int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_queue_mgt **queue_mgt;
+ struct nbl_res_product_ops *product_ops =
+ NBL_RES_MGT_TO_PROD_OPS(res_mgt);
+ int ret = 0;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ queue_mgt = &NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+
+ ret = nbl_queue_setup_mgt(dev, queue_mgt);
+ if (ret)
+ return ret;
+
+ NBL_OPS_CALL(product_ops->queue_mgt_init, (*queue_mgt));
+
+ return 0;
+}
+
+void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_queue_mgt **queue_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ queue_mgt = &NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+
+ if (!(*queue_mgt))
+ return;
+
+ nbl_queue_remove_mgt(dev, queue_mgt);
+}
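nbl_queue_mgt_start()/stop() fill and clear the caller's management slot
through a pointer-to-pointer, so a stop before (or after) a successful start
is harmless. A standalone sketch of that lifecycle, using plain
calloc()/free() in place of the devm allocations (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for struct nbl_queue_mgt */
struct demo_mgt {
	int ready;
};

/* start allocates through a pointer-to-pointer so the caller's slot
 * is filled in place, mirroring nbl_queue_mgt_start().
 */
static int demo_mgt_start(struct demo_mgt **mgt)
{
	*mgt = calloc(1, sizeof(**mgt));
	if (!*mgt)
		return -1;	/* -ENOMEM in the kernel */
	(*mgt)->ready = 1;
	return 0;
}

/* stop tolerates a never-started slot and leaves it NULL afterwards,
 * mirroring the !(*queue_mgt) check in nbl_queue_mgt_stop().
 */
static void demo_mgt_stop(struct demo_mgt **mgt)
{
	if (!*mgt)
		return;
	free(*mgt);
	*mgt = NULL;
}
```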
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h
new file mode 100644
index 000000000000..94a5b27f1bcb
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_QUEUE_H_
+#define _NBL_QUEUE_H_
+
+#include "nbl_resource.h"
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
index b8653e0039d7..2195e4084040 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
@@ -422,6 +422,10 @@ u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id);
int nbl_intr_mgt_start(struct nbl_resource_mgt *res_mgt);
void nbl_intr_mgt_stop(struct nbl_resource_mgt *res_mgt);
int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops);
+
+int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt);
+void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt);
+
void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type,
enum nbl_vsi_serv_type *dst_type);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
index 977f2bba3d65..0252082c55dc 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
@@ -10,6 +10,54 @@
#include "nbl_include.h"
struct nbl_hw_ops {
+ int (*init_chip_module)(void *priv, u8 eth_speed, u8 eth_num);
+ void (*deinit_chip_module)(void *priv);
+ int (*init_qid_map_table)(void *priv);
+ int (*set_qid_map_table)(void *priv, void *data, int qid_map_select);
+ int (*set_qid_map_ready)(void *priv, bool ready);
+ int (*cfg_ipro_queue_tbl)(void *priv, u16 queue_id, u16 vsi_id,
+ u8 enable);
+ int (*cfg_ipro_dn_sport_tbl)(void *priv, u16 vsi_id, u16 dst_eth_id,
+ u16 bmode, bool binit);
+ int (*set_vnet_queue_info)(void *priv,
+ struct nbl_vnet_queue_info_param *param,
+ u16 queue_id);
+ int (*clear_vnet_queue_info)(void *priv, u16 queue_id);
+ int (*reset_dvn_cfg)(void *priv, u16 queue_id);
+ int (*reset_uvn_cfg)(void *priv, u16 queue_id);
+ int (*restore_dvn_context)(void *priv, u16 queue_id, u16 split,
+ u16 last_avail_index);
+ int (*restore_uvn_context)(void *priv, u16 queue_id, u16 split,
+ u16 last_avail_index);
+
+ int (*cfg_tx_queue)(void *priv, void *data, u16 queue_id);
+ int (*cfg_rx_queue)(void *priv, void *data, u16 queue_id);
+
+ int (*cfg_q2tc_netid)(void *priv, u16 queue_id, u16 netid, u16 vld);
+
+ void (*active_shaping)(void *priv, u16 func_id);
+ void (*deactive_shaping)(void *priv, u16 func_id);
+
+ int (*cfg_dsch_net_to_group)(void *priv, u16 func_id, u16 group_id,
+ u16 vld);
+ int (*init_epro_rss_key)(void *priv);
+
+ int (*init_epro_vpt_tbl)(void *priv, u16 vsi_id);
+ int (*cfg_epro_rss_ret)(void *priv, u32 index, u8 size_type, u32 q_num,
+ u16 *queue_list, const u32 *indir);
+ int (*set_epro_rss_pt)(void *priv, u16 vsi_id, u16 rss_ret_base,
+ u16 rss_entry_size);
+ int (*clear_epro_rss_pt)(void *priv, u16 vsi_id);
+ int (*disable_dvn)(void *priv, u16 queue_id);
+ int (*disable_uvn)(void *priv, u16 queue_id);
+ int (*lso_dsch_drain)(void *priv, u16 queue_id);
+ int (*rsc_cache_drain)(void *priv, u16 queue_id);
+ u16 (*save_dvn_ctx)(void *priv, u16 queue_id, u16 split);
+ u16 (*save_uvn_ctx)(void *priv, u16 queue_id, u16 split,
+ u16 queue_size);
+ void (*setup_queue_switch)(void *priv, u16 eth_id);
+ void (*init_pfc)(void *priv, u8 ether_ports);
+ void (*set_promisc_mode)(void *priv, u16 vsi_id, u16 eth_id, u16 mode);
void (*configure_msix_map)(void *priv, u16 func_id, bool valid,
dma_addr_t dma_addr, u8 bus, u8 devid,
u8 function);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
index 9bc90fe824bd..1379567fb847 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
@@ -16,6 +16,9 @@ struct nbl_resource_pt_ops {
};
struct nbl_resource_ops {
+ int (*init_chip_module)(void *priv);
+ void (*deinit_chip_module)(void *priv);
+ int (*queue_init)(void *priv);
int (*configure_msix_map)(void *priv, u16 func_id, u16 num_net_msix,
u16 num_others_msix, bool net_msix_mask_en);
int (*destroy_msix_map)(void *priv, u16 func_id);
@@ -27,11 +30,35 @@ struct nbl_resource_ops {
struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result);
int (*unregister_net)(void *priv, u16 func_id);
+ int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
+ void (*free_txrx_queues)(void *priv, u16 vsi_id);
+ int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num);
+ int (*setup_q2vsi)(void *priv, u16 vsi_id);
+ void (*remove_q2vsi)(void *priv, u16 vsi_id);
+ int (*setup_rss)(void *priv, u16 vsi_id);
+ void (*remove_rss)(void *priv, u16 vsi_id);
+ int (*setup_queue)(void *priv, struct nbl_txrx_queue_param *param,
+ bool is_tx);
+ void (*remove_all_queues)(void *priv, u16 vsi_id);
+ int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld);
+ int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps,
+ bool rss_indir_set);
+ void (*remove_cqs)(void *priv, u16 vsi_id);
+ void (*clear_queues)(void *priv, u16 vsi_id);
+
+ u16 (*get_global_queue_id)(void *priv, u16 vsi_id, u16 local_queue_id);
+
u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id,
u32 *irq_data);
u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
u8 *logic_eth_id);
+ int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ void (*get_rxfh_indir_size)(void *priv, u16 vsi_id,
+ u32 *rxfh_indir_size);
+ int (*set_rxfh_indir)(void *priv, u16 vsi_id, const u32 *indir,
+ u32 indir_size);
};
struct nbl_resource_ops_tbl {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 651100e59f0a..d4e18f2a2931 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -87,6 +87,15 @@ struct nbl_qid_map_param {
u16 len;
};
+struct nbl_vnet_queue_info_param {
+ u32 function_id;
+ u32 device_id;
+ u32 bus_id;
+ u32 msix_idx;
+ u32 msix_idx_valid;
+ u32 valid;
+};
+
struct nbl_queue_cfg_param {
+	/* queue args */
u64 desc;
@@ -180,6 +189,25 @@ enum nbl_fw_port_speed {
NBL_FW_PORT_SPEED_100G,
};
+#define NBL_OPS_CALL(func, para) \
+do { \
+ typeof(func) _func = (func); \
+ if (_func) \
+ _func para; \
+} while (0)
+
+#define NBL_OPS_CALL_RET(func, para) \
+({ \
+ typeof(func) _func = (func); \
+ _func ? _func para : 0; \
+})
+
+#define NBL_OPS_CALL_RET_PTR(func, para) \
+({ \
+ typeof(func) _func = (func); \
+ _func ? _func para : NULL; \
+})
+
enum {
NBL_NETIF_F_SG_BIT, /* Scatter/gather IO. */
NBL_NETIF_F_IP_CSUM_BIT, /* csum TCP/UDP over IPv4 */
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v3 net-next 08/15] net/nebula-matrix: add vsi resource implementation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (6 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 07/15] net/nebula-matrix: add queue " illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 09/15] net/nebula-matrix: add txrx resource interfaces and tx func illusion.wang
` (7 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
The HW (hardware) layer code is highly chip-specific, so it lends
itself to a quick review.
Chip initialization covers the DP module, the intf module, and the
P4 registers.
Initializing the DP module in turn initializes the dped (downstream
pkt edit), uped (upstream pkt edit), dsch (downstream schedule),
ustore, dstore, dvn, uvn, and uqm modules.
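As a rough illustration of the fail-fast, in-order sub-module
initialization described above — the module names come from this message,
but the ordering shown and the helpers are assumptions for the sketch; the
real hooks program chip registers:

```c
#include <assert.h>
#include <string.h>

/* Records each "init" so the ordering can be checked afterwards. */
static char trace[128];

static int record(const char *name)
{
	strcat(trace, name);
	strcat(trace, " ");
	return 0;	/* a real hook would return -errno on failure */
}

static int demo_dp_init(void)
{
	/* illustrative ordering of the DP sub-modules named above */
	static const char *const mods[] = {
		"dped", "uped", "dsch", "ustore",
		"dstore", "dvn", "uvn", "uqm",
	};
	size_t i;
	int ret;

	for (i = 0; i < sizeof(mods) / sizeof(mods[0]); i++) {
		ret = record(mods[i]);
		if (ret)
			return ret;	/* fail fast on the first error */
	}
	return 0;
}
```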
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 1 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 424 ++++++++++++++++++
.../nbl_hw_leonis/nbl_resource_leonis.c | 10 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 4 +
.../nebula-matrix/nbl/nbl_hw/nbl_vsi.c | 120 +++++
.../nebula-matrix/nbl/nbl_hw/nbl_vsi.h | 12 +
6 files changed, 571 insertions(+)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index 1e873a4cd31e..f2a68245944a 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -13,6 +13,7 @@ nbl_core-objs += nbl_common/nbl_common.o \
nbl_hw/nbl_resource.o \
nbl_hw/nbl_interrupt.o \
nbl_hw/nbl_queue.o \
+ nbl_hw/nbl_vsi.o \
nbl_core/nbl_dispatch.o \
nbl_core/nbl_service.o \
nbl_core/nbl_dev.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index 9b3ba97428a9..43bb9d4af67a 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -25,6 +25,406 @@ static u32 nbl_hw_get_quirks(void *priv)
return quirks;
}
+static void nbl_configure_dped_checksum(struct nbl_hw_mgt *hw_mgt)
+{
+ union dped_l4_ck_cmd_40_u l4_ck_cmd_40;
+
+ /* DPED dped_l4_ck_cmd_40 for sctp */
+ nbl_hw_rd_regs(hw_mgt, NBL_DPED_L4_CK_CMD_40_ADDR, (u8 *)&l4_ck_cmd_40,
+ sizeof(l4_ck_cmd_40));
+ l4_ck_cmd_40.info.en = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DPED_L4_CK_CMD_40_ADDR, (u8 *)&l4_ck_cmd_40,
+ sizeof(l4_ck_cmd_40));
+}
+
+static int nbl_dped_init(struct nbl_hw_mgt *hw_mgt)
+{
+ nbl_hw_wr32(hw_mgt, NBL_DPED_VLAN_OFFSET, 0xC);
+ nbl_hw_wr32(hw_mgt, NBL_DPED_DSCP_OFFSET_0, 0x8);
+ nbl_hw_wr32(hw_mgt, NBL_DPED_DSCP_OFFSET_1, 0x4);
+
+	/* DPED checksum offload */
+ nbl_configure_dped_checksum(hw_mgt);
+
+ return 0;
+}
+
+static int nbl_uped_init(struct nbl_hw_mgt *hw_mgt)
+{
+ struct ped_hw_edit_profile hw_edit;
+
+ nbl_hw_rd_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(5), (u8 *)&hw_edit,
+ sizeof(hw_edit));
+ hw_edit.l3_len = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(5), (u8 *)&hw_edit,
+ sizeof(hw_edit));
+
+ nbl_hw_rd_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(6), (u8 *)&hw_edit,
+ sizeof(hw_edit));
+ hw_edit.l3_len = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(6), (u8 *)&hw_edit,
+ sizeof(hw_edit));
+
+ return 0;
+}
+
+static void nbl_shaping_eth_init(struct nbl_hw_mgt *hw_mgt, u8 eth_id, u8 speed)
+{
+ struct nbl_shaping_dport dport = { 0 };
+ struct nbl_shaping_dvn_dport dvn_dport = { 0 };
+ u32 rate, half_rate;
+
+ if (speed == NBL_FW_PORT_SPEED_100G) {
+ rate = NBL_SHAPING_DPORT_100G_RATE;
+ half_rate = NBL_SHAPING_DPORT_HALF_100G_RATE;
+ } else {
+ rate = NBL_SHAPING_DPORT_25G_RATE;
+ half_rate = NBL_SHAPING_DPORT_HALF_25G_RATE;
+ }
+
+ dport.cir = rate;
+ dport.pir = rate;
+ dport.depth = max(dport.cir * 2, NBL_LR_LEONIS_NET_BUCKET_DEPTH);
+ dport.cbs = dport.depth;
+ dport.pbs = dport.depth;
+ dport.valid = 1;
+
+ dvn_dport.cir = half_rate;
+ dvn_dport.pir = rate;
+ dvn_dport.depth = dport.depth;
+ dvn_dport.cbs = dvn_dport.depth;
+ dvn_dport.pbs = dvn_dport.depth;
+ dvn_dport.valid = 1;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_DPORT_REG(eth_id), (u8 *)&dport,
+ sizeof(dport));
+ nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_DVN_DPORT_REG(eth_id),
+ (u8 *)&dvn_dport, sizeof(dvn_dport));
+}
+
+static int nbl_shaping_init(struct nbl_hw_mgt *hw_mgt, u8 speed)
+{
+#define NBL_SHAPING_FLUSH_INTERVAL 128
+ struct dsch_psha_en psha_en = { 0 };
+ struct nbl_shaping_net net_shaping = { 0 };
+ int num = NBL_SHAPING_FLUSH_INTERVAL;
+ int i;
+
+ for (i = 0; i < NBL_MAX_ETHERNET; i++)
+ nbl_shaping_eth_init(hw_mgt, i, speed);
+
+ psha_en.en = 0xF;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_PSHA_EN_ADDR, (u8 *)&psha_en,
+ sizeof(psha_en));
+
+ for (i = 0; i < NBL_MAX_FUNC; i++) {
+ nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET_REG(i),
+ (u8 *)&net_shaping, sizeof(net_shaping));
+ if ((i % num) == 0)
+ nbl_flush_writes(hw_mgt);
+ }
+ nbl_flush_writes(hw_mgt);
+ return 0;
+}
+
+static int nbl_dsch_qid_max_init(struct nbl_hw_mgt *hw_mgt)
+{
+ struct dsch_vn_quanta quanta = { 0 };
+
+ quanta.h_qua = NBL_HOST_QUANTA;
+ quanta.e_qua = NBL_ECPU_QUANTA;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_QUANTA_ADDR, (u8 *)&quanta,
+ sizeof(quanta));
+ nbl_hw_wr32(hw_mgt, NBL_DSCH_HOST_QID_MAX, NBL_MAX_QUEUE_ID);
+
+ nbl_hw_wr32(hw_mgt, NBL_DVN_ECPU_QUEUE_NUM, 0);
+ nbl_hw_wr32(hw_mgt, NBL_UVN_ECPU_QUEUE_NUM, 0);
+
+ return 0;
+}
+
+static int nbl_ustore_init(struct nbl_hw_mgt *hw_mgt, u8 eth_num)
+{
+ struct ustore_pkt_len pkt_len;
+ struct nbl_ustore_port_drop_th drop_th;
+ int i;
+
+ nbl_hw_rd_regs(hw_mgt, NBL_USTORE_PKT_LEN_ADDR, (u8 *)&pkt_len,
+ sizeof(pkt_len));
+	/* minimum ARP packet length: 42 bytes (14-byte Ethernet + 28-byte ARP) */
+	pkt_len.min = 42;
+ nbl_hw_wr_regs(hw_mgt, NBL_USTORE_PKT_LEN_ADDR, (u8 *)&pkt_len,
+ sizeof(pkt_len));
+
+ drop_th.en = 1;
+ if (eth_num == 1)
+ drop_th.disc_th = NBL_USTORE_SIGNLE_ETH_DROP_TH;
+ else if (eth_num == 2)
+ drop_th.disc_th = NBL_USTORE_DUAL_ETH_DROP_TH;
+ else
+ drop_th.disc_th = NBL_USTORE_QUAD_ETH_DROP_TH;
+
+ for (i = 0; i < 4; i++)
+ nbl_hw_wr_regs(hw_mgt, NBL_USTORE_PORT_DROP_TH_REG_ARR(i),
+ (u8 *)&drop_th, sizeof(drop_th));
+
+ for (i = 0; i < NBL_MAX_ETHERNET; i++) {
+ nbl_hw_rd32(hw_mgt, NBL_USTORE_BUF_PORT_DROP_PKT(i));
+ nbl_hw_rd32(hw_mgt, NBL_USTORE_BUF_PORT_TRUN_PKT(i));
+ }
+
+ return 0;
+}
+
+static int nbl_dstore_init(struct nbl_hw_mgt *hw_mgt, u8 speed)
+{
+ struct dstore_d_dport_fc_th fc_th;
+ struct dstore_port_drop_th drop_th;
+ struct dstore_disc_bp_th bp_th;
+ int i;
+
+ for (i = 0; i < 6; i++) {
+ nbl_hw_rd_regs(hw_mgt, NBL_DSTORE_PORT_DROP_TH_REG(i),
+ (u8 *)&drop_th, sizeof(drop_th));
+ drop_th.en = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSTORE_PORT_DROP_TH_REG(i),
+ (u8 *)&drop_th, sizeof(drop_th));
+ }
+
+ nbl_hw_rd_regs(hw_mgt, NBL_DSTORE_DISC_BP_TH, (u8 *)&bp_th,
+ sizeof(bp_th));
+ bp_th.en = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSTORE_DISC_BP_TH, (u8 *)&bp_th,
+ sizeof(bp_th));
+
+ for (i = 0; i < 4; i++) {
+ nbl_hw_rd_regs(hw_mgt, NBL_DSTORE_D_DPORT_FC_TH_REG(i),
+ (u8 *)&fc_th, sizeof(fc_th));
+ if (speed == NBL_FW_PORT_SPEED_100G) {
+ fc_th.xoff_th = NBL_DSTORE_DROP_XOFF_TH_100G;
+ fc_th.xon_th = NBL_DSTORE_DROP_XON_TH_100G;
+ } else {
+ fc_th.xoff_th = NBL_DSTORE_DROP_XOFF_TH;
+ fc_th.xon_th = NBL_DSTORE_DROP_XON_TH;
+ }
+
+ fc_th.fc_en = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DSTORE_D_DPORT_FC_TH_REG(i),
+ (u8 *)&fc_th, sizeof(fc_th));
+ }
+
+ return 0;
+}
+
+static void nbl_dvn_descreq_num_cfg(void *priv, u32 descreq_num)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ struct nbl_dvn_descreq_num_cfg descreq_num_cfg = { 0 };
+	u32 packet_ring_prefetch_num = descreq_num & 0xffff;
+	u32 split_ring_prefetch_num = (descreq_num >> 16) & 0xffff;
+
+	packet_ring_prefetch_num = clamp(packet_ring_prefetch_num, 8U, 32U);
+	descreq_num_cfg.packed_l1_num = (packet_ring_prefetch_num - 8) / 4;
+
+	split_ring_prefetch_num = clamp(split_ring_prefetch_num, 8U, 16U);
+	descreq_num_cfg.avring_cfg_num = split_ring_prefetch_num > 8 ? 1 : 0;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_DESCREQ_NUM_CFG, (u8 *)&descreq_num_cfg,
+ sizeof(descreq_num_cfg));
+}
+
+static int nbl_dvn_init(struct nbl_hw_mgt *hw_mgt, u8 speed)
+{
+ struct nbl_dvn_desc_wr_merge_timeout timeout = { 0 };
+ struct nbl_dvn_dif_req_rd_ro_flag ro_flag = { 0 };
+
+ timeout.cfg_cycle = DEFAULT_DVN_DESC_WR_MERGE_TIMEOUT_MAX;
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_DESC_WR_MERGE_TIMEOUT, (u8 *)&timeout,
+ sizeof(timeout));
+
+ ro_flag.rd_desc_ro_en = 1;
+ ro_flag.rd_data_ro_en = 1;
+ ro_flag.rd_avring_ro_en = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_DVN_DIF_REQ_RD_RO_FLAG, (u8 *)&ro_flag,
+ sizeof(ro_flag));
+
+ if (speed == NBL_FW_PORT_SPEED_100G)
+ nbl_dvn_descreq_num_cfg(hw_mgt,
+ DEFAULT_DVN_100G_DESCREQ_NUMCFG);
+ else
+ nbl_dvn_descreq_num_cfg(hw_mgt, DEFAULT_DVN_DESCREQ_NUMCFG);
+
+ return 0;
+}
+
+static int nbl_uvn_init(struct nbl_hw_mgt *hw_mgt)
+{
+ struct uvn_desc_prefetch_init prefetch_init = { 0 };
+ struct uvn_desc_wr_timeout desc_wr_timeout = { 0 };
+ struct uvn_queue_err_mask mask = { 0 };
+ struct uvn_dif_req_ro_flag flag = { 0 };
+	u32 timeout = 119760; /* 200us: 200000ns / 1.67ns per cycle */
+ u16 wr_timeout = 0x12c;
+ u32 quirks;
+
+ nbl_hw_wr32(hw_mgt, NBL_UVN_DESC_RD_WAIT, timeout);
+
+ desc_wr_timeout.num = wr_timeout;
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_DESC_WR_TIMEOUT, (u8 *)&desc_wr_timeout,
+ sizeof(desc_wr_timeout));
+
+ flag.avail_rd = 1;
+ flag.desc_rd = 1;
+ flag.pkt_wr = 1;
+ flag.desc_wr = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_DIF_REQ_RO_FLAG, (u8 *)&flag,
+ sizeof(flag));
+
+ nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_ERR_MASK, (u8 *)&mask,
+ sizeof(mask));
+ mask.dif_err = 1;
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_ERR_MASK, (u8 *)&mask,
+ sizeof(mask));
+
+ prefetch_init.num = NBL_UVN_DESC_PREFETCH_NUM;
+ prefetch_init.sel = 0;
+
+ quirks = nbl_hw_get_quirks(hw_mgt);
+
+ if (!(quirks & BIT(NBL_QUIRKS_UVN_PREFETCH_ALIGN)))
+ prefetch_init.sel = 1;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_UVN_DESC_PREFETCH_INIT, (u8 *)&prefetch_init,
+ sizeof(prefetch_init));
+
+ return 0;
+}
+
+static int nbl_uqm_init(struct nbl_hw_mgt *hw_mgt)
+{
+ struct nbl_uqm_que_type que_type = { 0 };
+ u32 cnt = 0;
+ int i;
+
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_FWD_DROP_CNT, (u8 *)&cnt, sizeof(cnt));
+
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_PKT_CNT, (u8 *)&cnt, sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_PKT_SLICE_CNT, (u8 *)&cnt,
+ sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_PKT_LEN_ADD_CNT, (u8 *)&cnt,
+ sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_HEAD_PNTR_ADD_CNT, (u8 *)&cnt,
+ sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_WEIGHT_ADD_CNT, (u8 *)&cnt,
+ sizeof(cnt));
+
+ for (i = 0; i < NBL_UQM_PORT_DROP_DEPTH; i++) {
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_UQM_PORT_DROP_PKT_CNT + (sizeof(cnt) * i),
+ (u8 *)&cnt, sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_UQM_PORT_DROP_PKT_SLICE_CNT +
+ (sizeof(cnt) * i),
+ (u8 *)&cnt, sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_UQM_PORT_DROP_PKT_LEN_ADD_CNT +
+ (sizeof(cnt) * i),
+ (u8 *)&cnt, sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_UQM_PORT_DROP_HEAD_PNTR_ADD_CNT +
+ (sizeof(cnt) * i),
+ (u8 *)&cnt, sizeof(cnt));
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_UQM_PORT_DROP_WEIGHT_ADD_CNT +
+ (sizeof(cnt) * i),
+ (u8 *)&cnt, sizeof(cnt));
+ }
+
+ for (i = 0; i < NBL_UQM_DPORT_DROP_DEPTH; i++)
+ nbl_hw_wr_regs(hw_mgt,
+ NBL_UQM_DPORT_DROP_CNT + (sizeof(cnt) * i),
+ (u8 *)&cnt, sizeof(cnt));
+
+ que_type.bp_drop = 0;
+ nbl_hw_wr_regs(hw_mgt, NBL_UQM_QUE_TYPE, (u8 *)&que_type,
+ sizeof(que_type));
+
+ return 0;
+}
+
+static int nbl_dp_init(struct nbl_hw_mgt *hw_mgt, u8 speed, u8 eth_num)
+{
+ nbl_dped_init(hw_mgt);
+ nbl_uped_init(hw_mgt);
+ nbl_shaping_init(hw_mgt, speed);
+ nbl_dsch_qid_max_init(hw_mgt);
+ nbl_ustore_init(hw_mgt, eth_num);
+ nbl_dstore_init(hw_mgt, speed);
+ nbl_dvn_init(hw_mgt, speed);
+ nbl_uvn_init(hw_mgt);
+ nbl_uqm_init(hw_mgt);
+
+ return 0;
+}
+
+static int nbl_host_padpt_init(struct nbl_hw_mgt *hw_mgt)
+{
+ /* padpt flow control register */
+ nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_CPLH_UP, 0x10400);
+ nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_PD_DN, 0x10080);
+ nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_PH_DN, 0x10010);
+ nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_NPH_DN, 0x10010);
+
+ return 0;
+}
+
+static int nbl_intf_init(struct nbl_hw_mgt *hw_mgt)
+{
+ nbl_host_padpt_init(hw_mgt);
+ return 0;
+}
+
+static void nbl_hw_set_driver_status(struct nbl_hw_mgt *hw_mgt, bool active)
+{
+ u32 status = 0;
+
+ status = nbl_hw_rd32(hw_mgt, NBL_DRIVER_STATUS_REG);
+
+ status = (status & ~(1 << NBL_DRIVER_STATUS_BIT)) |
+ (active << NBL_DRIVER_STATUS_BIT);
+
+ nbl_hw_wr32(hw_mgt, NBL_DRIVER_STATUS_REG, status);
+}
+
+static void nbl_hw_deinit_chip_module(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+
+ nbl_hw_set_driver_status(hw_mgt, false);
+}
+
+static int nbl_hw_init_chip_module(void *priv, u8 eth_speed, u8 eth_num)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+
+ nbl_debug(NBL_HW_MGT_TO_COMMON(hw_mgt), "hw_chip_init");
+
+ nbl_dp_init(hw_mgt, eth_speed, eth_num);
+ nbl_intf_init(hw_mgt);
+
+ nbl_write_all_regs(hw_mgt);
+ nbl_hw_set_driver_status(hw_mgt, true);
+ hw_mgt->version = nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG);
+
+ return 0;
+}
+
static int nbl_hw_init_qid_map_table(void *priv)
{
struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
@@ -1181,6 +1581,25 @@ static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus,
(u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map));
}
+static void nbl_hw_set_promisc_mode(void *priv, u16 vsi_id, u16 eth_id,
+ u16 mode)
+{
+ struct nbl_ipro_upsport_tbl upsport;
+
+ nbl_hw_rd_regs(priv, NBL_IPRO_UP_SPORT_TABLE(eth_id), (u8 *)&upsport,
+ sizeof(upsport));
+ if (mode) {
+ upsport.set_dport.dport.up.upcall_flag = AUX_FWD_TYPE_NML_FWD;
+ upsport.set_dport.dport.up.port_type = SET_DPORT_TYPE_VSI_HOST;
+ upsport.set_dport.dport.up.port_id = vsi_id;
+ upsport.set_dport.dport.up.next_stg_sel = NEXT_STG_SEL_NONE;
+ } else {
+ upsport.set_dport.data = 0xFFF;
+ }
+ nbl_hw_wr_regs(priv, NBL_IPRO_UP_SPORT_TABLE(eth_id), (u8 *)&upsport,
+ sizeof(upsport));
+}
+
static void nbl_hw_get_board_info(void *priv,
struct nbl_board_port_info *board_info)
{
@@ -1215,6 +1634,8 @@ static u32 nbl_hw_get_fw_eth_map(void *priv)
}
static struct nbl_hw_ops hw_ops = {
+ .init_chip_module = nbl_hw_init_chip_module,
+ .deinit_chip_module = nbl_hw_deinit_chip_module,
.init_qid_map_table = nbl_hw_init_qid_map_table,
.set_qid_map_table = nbl_hw_set_qid_map_table,
.set_qid_map_ready = nbl_hw_set_qid_map_ready,
@@ -1238,6 +1659,7 @@ static struct nbl_hw_ops hw_ops = {
.cfg_epro_rss_ret = nbl_hw_cfg_epro_rss_ret,
.set_epro_rss_pt = nbl_hw_set_epro_rss_pt,
.clear_epro_rss_pt = nbl_hw_clear_epro_rss_pt,
+ .set_promisc_mode = nbl_hw_set_promisc_mode,
.disable_dvn = nbl_hw_disable_dvn,
.disable_uvn = nbl_hw_disable_uvn,
.lso_dsch_drain = nbl_hw_lso_dsch_drain,
@@ -1264,10 +1686,12 @@ static struct nbl_hw_ops hw_ops = {
.enable_mailbox_irq = nbl_hw_enable_mailbox_irq,
.enable_msix_irq = nbl_hw_enable_msix_irq,
.get_msix_irq_enable_info = nbl_hw_get_msix_irq_enable_info,
+
.get_fw_eth_num = nbl_hw_get_fw_eth_num,
.get_fw_eth_map = nbl_hw_get_fw_eth_map,
.get_board_info = nbl_hw_get_board_info,
.get_quirks = nbl_hw_get_quirks,
+
};
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
index 032131d1b9fa..83f8bb280b93 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
@@ -237,6 +237,10 @@ static int nbl_res_setup_ops(struct device *dev,
if (ret)
goto setup_fail;
ret = nbl_intr_setup_ops(&res_ops);
+ if (ret)
+ goto setup_fail;
+
+ ret = nbl_vsi_setup_ops(&res_ops);
if (ret)
goto setup_fail;
is_ops_inited = true;
@@ -440,6 +444,8 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
nbl_queue_mgt_stop(res_mgt);
nbl_intr_mgt_stop(res_mgt);
+ nbl_vsi_mgt_stop(res_mgt);
+
nbl_res_ctrl_dev_remove_vsi_info(res_mgt);
nbl_res_ctrl_dev_remove_eth_info(res_mgt);
nbl_res_ctrl_dev_sriov_info_remove(res_mgt);
@@ -483,6 +489,10 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis,
if (ret)
goto start_fail;
+ ret = nbl_vsi_mgt_start(res_mgt);
+ if (ret)
+ goto start_fail;
+
ret = nbl_intr_mgt_start(res_mgt);
if (ret)
goto start_fail;
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
index 2195e4084040..27a9900ff009 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
@@ -426,6 +426,10 @@ int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops);
int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt);
void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt);
+int nbl_vsi_mgt_start(struct nbl_resource_mgt *res_mgt);
+void nbl_vsi_mgt_stop(struct nbl_resource_mgt *res_mgt);
+int nbl_vsi_setup_ops(struct nbl_resource_ops *resource_ops);
+
void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type,
enum nbl_vsi_serv_type *dst_type);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c
new file mode 100644
index 000000000000..068058a90363
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c
@@ -0,0 +1,120 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <linux/etherdevice.h>
+
+#include "nbl_vsi.h"
+
+static int nbl_res_set_promisc_mode(void *priv, u16 vsi_id, u16 mode)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ u16 pf_id = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id);
+ u16 eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id);
+
+ if (pf_id >= NBL_RES_MGT_TO_PF_NUM(res_mgt))
+ return -EINVAL;
+
+ hw_ops->set_promisc_mode(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id,
+ eth_id, mode);
+
+ return 0;
+}
+
+static void nbl_res_vsi_deinit_chip_module(void *priv)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops;
+
+ hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+ hw_ops->deinit_chip_module(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+}
+
+static int nbl_res_vsi_init_chip_module(void *priv)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_hw_ops *hw_ops;
+	u8 eth_speed, eth_num;
+
+	if (!res_mgt)
+		return -EINVAL;
+
+	hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	eth_speed = res_mgt->resource_info->board_info.eth_speed;
+	eth_num = res_mgt->resource_info->board_info.eth_num;
+
+	return hw_ops->init_chip_module(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+					eth_speed, eth_num);
+}
+
+/* NBL_VSI_SET_OPS(ops_name, func)
+ *
+ * Use X Macros to reduce setup and remove codes.
+ */
+#define NBL_VSI_OPS_TBL \
+do { \
+ NBL_VSI_SET_OPS(init_chip_module, \
+ nbl_res_vsi_init_chip_module); \
+ NBL_VSI_SET_OPS(deinit_chip_module, \
+ nbl_res_vsi_deinit_chip_module); \
+ NBL_VSI_SET_OPS(set_promisc_mode, nbl_res_set_promisc_mode); \
+} while (0)
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_vsi_setup_mgt(struct device *dev, struct nbl_vsi_mgt **vsi_mgt)
+{
+ *vsi_mgt = devm_kzalloc(dev, sizeof(struct nbl_vsi_mgt), GFP_KERNEL);
+ if (!*vsi_mgt)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nbl_vsi_remove_mgt(struct device *dev, struct nbl_vsi_mgt **vsi_mgt)
+{
+ devm_kfree(dev, *vsi_mgt);
+ *vsi_mgt = NULL;
+}
+
+int nbl_vsi_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_vsi_mgt **vsi_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ vsi_mgt = &NBL_RES_MGT_TO_VSI_MGT(res_mgt);
+
+ return nbl_vsi_setup_mgt(dev, vsi_mgt);
+}
+
+void nbl_vsi_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_vsi_mgt **vsi_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ vsi_mgt = &NBL_RES_MGT_TO_VSI_MGT(res_mgt);
+
+ if (!(*vsi_mgt))
+ return;
+
+ nbl_vsi_remove_mgt(dev, vsi_mgt);
+}
+
+int nbl_vsi_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_VSI_SET_OPS(name, func)				\
+	do {							\
+		res_ops->NBL_NAME(name) = func;			\
+	} while (0)
+ NBL_VSI_OPS_TBL;
+#undef NBL_VSI_SET_OPS
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
new file mode 100644
index 000000000000..94831e00b89a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_VSI_H_
+#define _NBL_VSI_H_
+
+#include "nbl_resource.h"
+
+#endif
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v3 net-next 09/15] net/nebula-matrix: add txrx resource interfaces and tx func
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (7 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 08/15] net/nebula-matrix: add vsi " illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 10/15] net/nebula-matrix: add txrx interrupt handling logic illusion.wang
` (6 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
Add txrx resource interfaces, primarily the Tx function and ring
management.

1. Ring management for transmit and receive:
   Allocate Tx rings (nbl_alloc_tx_rings) and Rx rings
   (nbl_alloc_rx_rings).
   Free Tx rings (nbl_free_tx_rings) and Rx rings (nbl_free_rx_rings).
   Start Tx rings (nbl_res_txrx_start_tx_ring) and Rx rings
   (nbl_res_txrx_start_rx_ring).
   Stop Tx rings (nbl_res_txrx_stop_tx_ring) and Rx rings
   (nbl_res_txrx_stop_rx_ring).

2. Packet handling:
   Implement packet transmission (nbl_res_txrx_start_xmit), including
   TSO (TCP Segmentation Offload) and checksum offload.
   Manage transmit buffers (nbl_unmap_and_free_tx_resource) and
   receive buffers (nbl_put_rx_frag).
   Support multi-queue transmission and reception.

3. Interrupt and NAPI management:
   Provide an interface to get the NAPI structure
   (nbl_res_txrx_get_vector_napi).
   Set interrupt information (nbl_res_txrx_set_vector_info).
   Trigger receive rings (nbl_res_txrx_kick_rx_ring).
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 1 +
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 14 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 22 +-
.../nbl_hw_leonis/nbl_resource_leonis.c | 12 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 4 +
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.c | 1613 +++++++++++++++++
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.h | 184 ++
.../nbl/nbl_include/nbl_def_hw.h | 3 +
.../nbl/nbl_include/nbl_def_resource.h | 22 +
.../nbl/nbl_include/nbl_include.h | 15 +
10 files changed, 1889 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index f2a68245944a..a7254873f02e 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -12,6 +12,7 @@ nbl_core-objs += nbl_common/nbl_common.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
nbl_hw/nbl_resource.o \
nbl_hw/nbl_interrupt.o \
+ nbl_hw/nbl_txrx.o \
nbl_hw/nbl_queue.o \
nbl_hw/nbl_vsi.o \
nbl_core/nbl_dispatch.o \
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index 6d4b890a4d84..e39319cee159 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -38,6 +38,11 @@
#define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \
(&(NBL_ADAP_TO_SERV_OPS_TBL(adapter)->pt_ops))
+#define NBL_NETDEV_PRIV_TO_ADAPTER(priv) ((priv)->adapter)
+
+#define NBL_NETDEV_TO_ADAPTER(netdev) \
+ (NBL_NETDEV_PRIV_TO_ADAPTER( \
+ (struct nbl_netdev_priv *)netdev_priv(netdev)))
#define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
#define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
@@ -81,6 +86,15 @@ struct nbl_adapter {
struct nbl_init_param init_param;
};
+struct nbl_netdev_priv {
+ struct nbl_adapter *adapter;
+ struct net_device *netdev;
+ u16 tx_queue_num;
+ u16 rx_queue_num;
+ u16 queue_size;
+ u16 data_vsi;
+};
+
struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
struct nbl_init_param *param);
void nbl_core_remove(struct nbl_adapter *adapter);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index 43bb9d4af67a..7c51f4ee8715 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -1581,6 +1581,23 @@ static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus,
(u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map));
}
+static void nbl_hw_update_tail_ptr(void *priv, struct nbl_notify_param *param)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+ u8 __iomem *notify_addr = hw_mgt->hw_addr;
+ u32 local_qid = param->notify_qid;
+ u32 tail_ptr = param->tail_ptr;
+
+	writel((tail_ptr << 16) | local_qid, notify_addr);
+}
+
+static u8 __iomem *nbl_hw_get_tail_ptr(void *priv)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+
+ return hw_mgt->hw_addr;
+}
+
static void nbl_hw_set_promisc_mode(void *priv, u16 vsi_id, u16 eth_id,
u16 mode)
{
@@ -1668,6 +1685,7 @@ static struct nbl_hw_ops hw_ops = {
.save_uvn_ctx = nbl_hw_save_uvn_ctx,
.setup_queue_switch = nbl_hw_setup_queue_switch,
.init_pfc = nbl_hw_init_pfc,
+
.configure_msix_map = nbl_hw_configure_msix_map,
.configure_msix_info = nbl_hw_configure_msix_info,
@@ -1687,11 +1705,13 @@ static struct nbl_hw_ops hw_ops = {
.enable_msix_irq = nbl_hw_enable_msix_irq,
.get_msix_irq_enable_info = nbl_hw_get_msix_irq_enable_info,
+ .update_tail_ptr = nbl_hw_update_tail_ptr,
+ .get_tail_ptr = nbl_hw_get_tail_ptr,
+
.get_fw_eth_num = nbl_hw_get_fw_eth_num,
.get_fw_eth_map = nbl_hw_get_fw_eth_map,
.get_board_info = nbl_hw_get_board_info,
.get_quirks = nbl_hw_get_quirks,
-
};
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
index 83f8bb280b93..67cd140b12ca 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
@@ -236,6 +236,11 @@ static int nbl_res_setup_ops(struct device *dev,
ret = nbl_queue_setup_ops_leonis(&res_ops);
if (ret)
goto setup_fail;
+
+ ret = nbl_txrx_setup_ops(&res_ops);
+ if (ret)
+ goto setup_fail;
+
ret = nbl_intr_setup_ops(&res_ops);
if (ret)
goto setup_fail;
@@ -442,6 +447,7 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
nbl_queue_mgt_stop(res_mgt);
+ nbl_txrx_mgt_stop(res_mgt);
nbl_intr_mgt_stop(res_mgt);
nbl_vsi_mgt_stop(res_mgt);
@@ -497,6 +503,12 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis,
if (ret)
goto start_fail;
}
+
+ if (caps.has_net) {
+ ret = nbl_txrx_mgt_start(res_mgt);
+ if (ret)
+ goto start_fail;
+ }
return 0;
start_fail:
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
index 27a9900ff009..c508468d6de8 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
@@ -426,6 +426,10 @@ int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops);
int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt);
void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt);
+int nbl_txrx_mgt_start(struct nbl_resource_mgt *res_mgt);
+void nbl_txrx_mgt_stop(struct nbl_resource_mgt *res_mgt);
+int nbl_txrx_setup_ops(struct nbl_resource_ops *resource_ops);
+
int nbl_vsi_mgt_start(struct nbl_resource_mgt *res_mgt);
void nbl_vsi_mgt_stop(struct nbl_resource_mgt *res_mgt);
int nbl_vsi_setup_ops(struct nbl_resource_ops *resource_ops);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
new file mode 100644
index 000000000000..939166a074e2
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
@@ -0,0 +1,1613 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <linux/etherdevice.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <net/ipv6.h>
+#include <linux/sctp.h>
+#include <linux/if_vlan.h>
+#include <net/page_pool/helpers.h>
+
+#include "nbl_txrx.h"
+
+static bool nbl_txrx_within_vsi(struct nbl_txrx_vsi_info *vsi_info,
+ u16 ring_index)
+{
+ return ring_index >= vsi_info->ring_offset &&
+ ring_index < vsi_info->ring_offset + vsi_info->ring_num;
+}
+
+static struct netdev_queue *txring_txq(const struct nbl_res_tx_ring *ring)
+{
+ return netdev_get_tx_queue(ring->netdev, ring->queue_index);
+}
+
+static struct nbl_res_tx_ring *
+nbl_alloc_tx_ring(struct nbl_resource_mgt *res_mgt, struct net_device *netdev,
+ u16 ring_index, u16 desc_num)
+{
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_res_tx_ring *ring;
+
+ ring = devm_kzalloc(dev, sizeof(struct nbl_res_tx_ring), GFP_KERNEL);
+ if (!ring)
+ return NULL;
+
+ ring->vsi_info = txrx_mgt->vsi_info;
+ ring->dma_dev = common->dma_dev;
+ ring->product_type = common->product_type;
+ ring->eth_id = common->eth_id;
+ ring->queue_index = ring_index;
+	ring->notify_addr =
+		hw_ops->get_tail_ptr(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+ ring->notify_qid = NBL_RES_NOFITY_QID(res_mgt, ring_index * 2 + 1);
+ ring->netdev = netdev;
+ ring->desc_num = desc_num;
+ ring->used_wrap_counter = 1;
+ ring->avail_used_flags |= BIT(NBL_PACKED_DESC_F_AVAIL);
+
+ return ring;
+}
+
+static int nbl_alloc_tx_rings(struct nbl_resource_mgt *res_mgt,
+ struct net_device *netdev, u16 tx_num,
+ u16 desc_num)
+{
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_res_tx_ring *ring;
+ u32 ring_index;
+
+ if (txrx_mgt->tx_rings) {
+ netif_err(common, drv, netdev,
+ "Try to allocate tx_rings which already exists\n");
+ return -EINVAL;
+ }
+
+ txrx_mgt->tx_ring_num = tx_num;
+
+ txrx_mgt->tx_rings = devm_kcalloc(dev, tx_num,
+ sizeof(struct nbl_res_tx_ring *),
+ GFP_KERNEL);
+ if (!txrx_mgt->tx_rings)
+ return -ENOMEM;
+
+ for (ring_index = 0; ring_index < tx_num; ring_index++) {
+ ring = txrx_mgt->tx_rings[ring_index];
+ WARN_ON(ring);
+ ring = nbl_alloc_tx_ring(res_mgt, netdev, ring_index, desc_num);
+ if (!ring)
+ goto alloc_tx_ring_failed;
+
+ WRITE_ONCE(txrx_mgt->tx_rings[ring_index], ring);
+ }
+
+ return 0;
+
+alloc_tx_ring_failed:
+ while (ring_index--)
+ devm_kfree(dev, txrx_mgt->tx_rings[ring_index]);
+ devm_kfree(dev, txrx_mgt->tx_rings);
+ txrx_mgt->tx_rings = NULL;
+ return -ENOMEM;
+}
+
+static void nbl_free_tx_rings(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct nbl_res_tx_ring *ring;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ u16 ring_count;
+ u16 ring_index;
+
+ ring_count = txrx_mgt->tx_ring_num;
+ for (ring_index = 0; ring_index < ring_count; ring_index++) {
+ ring = txrx_mgt->tx_rings[ring_index];
+ devm_kfree(dev, ring);
+ }
+ devm_kfree(dev, txrx_mgt->tx_rings);
+ txrx_mgt->tx_rings = NULL;
+}
+
+static int nbl_alloc_rx_rings(struct nbl_resource_mgt *res_mgt,
+ struct net_device *netdev, u16 rx_num,
+ u16 desc_num)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_res_rx_ring *ring;
+ u32 ring_index;
+
+ if (txrx_mgt->rx_rings) {
+ netif_err(common, drv, netdev,
+ "Try to allocate rx_rings which already exists\n");
+ return -EINVAL;
+ }
+
+ txrx_mgt->rx_ring_num = rx_num;
+
+ txrx_mgt->rx_rings = devm_kcalloc(dev, rx_num,
+ sizeof(struct nbl_res_rx_ring *),
+ GFP_KERNEL);
+ if (!txrx_mgt->rx_rings)
+ return -ENOMEM;
+
+ for (ring_index = 0; ring_index < rx_num; ring_index++) {
+ ring = txrx_mgt->rx_rings[ring_index];
+ WARN_ON(ring);
+ ring = devm_kzalloc(dev, sizeof(struct nbl_res_rx_ring),
+ GFP_KERNEL);
+ if (!ring)
+ goto alloc_rx_ring_failed;
+
+ ring->common = common;
+ ring->txrx_mgt = txrx_mgt;
+ ring->dma_dev = common->dma_dev;
+ ring->queue_index = ring_index;
+ ring->notify_qid = NBL_RES_NOFITY_QID(res_mgt, ring_index * 2);
+ ring->netdev = netdev;
+ ring->desc_num = desc_num;
+		/* RX buffer length is determined by the MTU; buf_len is
+		 * updated accordingly when the netdev is brought up.
+		 */
+ ring->buf_len = PAGE_SIZE / 2 - NBL_RX_PAD;
+
+ ring->used_wrap_counter = 1;
+ ring->avail_used_flags |= BIT(NBL_PACKED_DESC_F_AVAIL);
+ WRITE_ONCE(txrx_mgt->rx_rings[ring_index], ring);
+ }
+
+ return 0;
+
+alloc_rx_ring_failed:
+ while (ring_index--)
+ devm_kfree(dev, txrx_mgt->rx_rings[ring_index]);
+ devm_kfree(dev, txrx_mgt->rx_rings);
+ txrx_mgt->rx_rings = NULL;
+ return -ENOMEM;
+}
+
+static void nbl_free_rx_rings(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct nbl_res_rx_ring *ring;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ u16 ring_count;
+ u16 ring_index;
+
+ ring_count = txrx_mgt->rx_ring_num;
+ for (ring_index = 0; ring_index < ring_count; ring_index++) {
+ ring = txrx_mgt->rx_rings[ring_index];
+ devm_kfree(dev, ring);
+ }
+ devm_kfree(dev, txrx_mgt->rx_rings);
+ txrx_mgt->rx_rings = NULL;
+}
+
+static int nbl_alloc_vectors(struct nbl_resource_mgt *res_mgt, u16 num)
+{
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct nbl_res_vector *vector;
+ u32 index;
+
+ if (txrx_mgt->vectors) {
+ nbl_err(common,
+ "Try to allocate vectors which already exists\n");
+ return -EINVAL;
+ }
+
+ txrx_mgt->vectors = devm_kcalloc(dev, num,
+ sizeof(struct nbl_res_vector *),
+ GFP_KERNEL);
+ if (!txrx_mgt->vectors)
+ return -ENOMEM;
+
+ for (index = 0; index < num; index++) {
+ vector = txrx_mgt->vectors[index];
+ WARN_ON(vector);
+ vector = devm_kzalloc(dev, sizeof(struct nbl_res_vector),
+ GFP_KERNEL);
+ if (!vector)
+ goto alloc_vector_failed;
+
+ vector->rx_ring = txrx_mgt->rx_rings[index];
+ vector->tx_ring = txrx_mgt->tx_rings[index];
+ WRITE_ONCE(txrx_mgt->vectors[index], vector);
+ }
+ return 0;
+
+alloc_vector_failed:
+ while (index--)
+ devm_kfree(dev, txrx_mgt->vectors[index]);
+ devm_kfree(dev, txrx_mgt->vectors);
+ txrx_mgt->vectors = NULL;
+ return -ENOMEM;
+}
+
+static void nbl_free_vectors(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+ struct nbl_res_vector *vector;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ u16 count, index;
+
+ count = txrx_mgt->rx_ring_num;
+ for (index = 0; index < count; index++) {
+ vector = txrx_mgt->vectors[index];
+ devm_kfree(dev, vector);
+ }
+ devm_kfree(dev, txrx_mgt->vectors);
+ txrx_mgt->vectors = NULL;
+}
+
+static int nbl_res_txrx_alloc_rings(void *priv, struct net_device *netdev,
+ struct nbl_ring_param *param)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ int err = 0;
+
+ err = nbl_alloc_tx_rings(res_mgt, netdev, param->tx_ring_num,
+ param->queue_size);
+ if (err)
+ return err;
+
+ err = nbl_alloc_rx_rings(res_mgt, netdev, param->rx_ring_num,
+ param->queue_size);
+ if (err)
+ goto alloc_rx_rings_err;
+
+ err = nbl_alloc_vectors(res_mgt, param->rx_ring_num);
+ if (err)
+ goto alloc_vectors_err;
+
+ nbl_info(res_mgt->common, "Alloc rings for %d tx, %d rx, %d desc\n",
+ param->tx_ring_num, param->rx_ring_num, param->queue_size);
+ return 0;
+
+alloc_vectors_err:
+ nbl_free_rx_rings(res_mgt);
+alloc_rx_rings_err:
+ nbl_free_tx_rings(res_mgt);
+ return err;
+}
+
+static void nbl_res_txrx_remove_rings(void *priv)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+
+ nbl_free_vectors(res_mgt);
+ nbl_free_tx_rings(res_mgt);
+ nbl_free_rx_rings(res_mgt);
+ nbl_debug(res_mgt->common, "Remove rings");
+}
+
+static dma_addr_t nbl_res_txrx_start_tx_ring(void *priv, u8 ring_index)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+ struct nbl_res_tx_ring *tx_ring =
+ NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index);
+
+ if (tx_ring->tx_bufs) {
+ nbl_err(res_mgt->common,
+ "Try to setup a TX ring with buffer management array already allocated\n");
+ return (dma_addr_t)NULL;
+ }
+
+ tx_ring->tx_bufs = devm_kcalloc(dev, tx_ring->desc_num,
+ sizeof(*tx_ring->tx_bufs), GFP_KERNEL);
+ if (!tx_ring->tx_bufs)
+ return (dma_addr_t)NULL;
+
+	/* Alloc twice memory, and second half is used to back up the desc
+	 * for desc checking
+	 */
+ tx_ring->size = ALIGN(tx_ring->desc_num * sizeof(struct nbl_ring_desc),
+ PAGE_SIZE);
+ tx_ring->desc = dmam_alloc_coherent(dma_dev, tx_ring->size,
+ &tx_ring->dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!tx_ring->desc)
+ goto alloc_dma_err;
+
+ tx_ring->next_to_use = 0;
+ tx_ring->next_to_clean = 0;
+ tx_ring->tail_ptr = 0;
+
+ tx_ring->valid = true;
+ nbl_debug(res_mgt->common, "Start tx ring %d", ring_index);
+ return tx_ring->dma;
+
+alloc_dma_err:
+ devm_kfree(dev, tx_ring->tx_bufs);
+ tx_ring->tx_bufs = NULL;
+ tx_ring->size = 0;
+ return (dma_addr_t)NULL;
+}
+
+static __always_inline bool nbl_rx_cache_get(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_dma_info *dma_info)
+{
+ struct nbl_page_cache *cache = &rx_ring->page_cache;
+ struct nbl_rx_queue_stats *stats = &rx_ring->rx_stats;
+
+ if (unlikely(cache->head == cache->tail)) {
+ stats->rx_cache_empty++;
+ return false;
+ }
+
+ if (page_ref_count(cache->page_cache[cache->head].page) != 1) {
+ stats->rx_cache_busy++;
+ return false;
+ }
+
+ *dma_info = cache->page_cache[cache->head];
+ cache->head = (cache->head + 1) & (NBL_MAX_CACHE_SIZE - 1);
+ stats->rx_cache_reuse++;
+
+ dma_sync_single_for_device(rx_ring->dma_dev, dma_info->addr,
+ dma_info->size, DMA_FROM_DEVICE);
+ return true;
+}
+
+static __always_inline int nbl_page_alloc_pool(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_dma_info *dma_info)
+{
+ if (nbl_rx_cache_get(rx_ring, dma_info))
+ return 0;
+
+ dma_info->page = page_pool_dev_alloc_pages(rx_ring->page_pool);
+ if (unlikely(!dma_info->page))
+ return -ENOMEM;
+
+ dma_info->addr = dma_map_page_attrs(rx_ring->dma_dev, dma_info->page, 0,
+ dma_info->size, DMA_FROM_DEVICE,
+ NBL_RX_DMA_ATTR);
+
+ if (unlikely(dma_mapping_error(rx_ring->dma_dev, dma_info->addr))) {
+ page_pool_recycle_direct(rx_ring->page_pool, dma_info->page);
+ dma_info->page = NULL;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static __always_inline int nbl_get_rx_frag(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_rx_buffer *buffer)
+{
+ int err = 0;
+
+	/* only the first buffer in a page allocates the backing page */
+ if (buffer->first_in_page)
+ err = nbl_page_alloc_pool(rx_ring, buffer->di);
+
+ return err;
+}
+
+static __always_inline bool nbl_alloc_rx_bufs(struct nbl_res_rx_ring *rx_ring,
+ u16 count)
+{
+ u32 buf_len;
+ u16 next_to_use, head;
+ __le16 head_flags = 0;
+ struct nbl_ring_desc *rx_desc, *head_desc;
+ struct nbl_rx_buffer *rx_buf;
+ int i;
+
+	if (unlikely(!rx_ring))
+		return true;
+
+	if (unlikely(!count)) {
+		nbl_warn(NBL_RING_TO_COMMON(rx_ring),
+			 "invalid input parameter, count is 0\n");
+		return true;
+	}
+
+ buf_len = rx_ring->buf_len;
+ next_to_use = rx_ring->next_to_use;
+
+ head = next_to_use;
+ head_desc = NBL_RX_DESC(rx_ring, next_to_use);
+ rx_desc = NBL_RX_DESC(rx_ring, next_to_use);
+ rx_buf = NBL_RX_BUF(rx_ring, next_to_use);
+
+ if (unlikely(!rx_desc || !rx_buf)) {
+ nbl_warn(NBL_RING_TO_COMMON(rx_ring),
+ "invalid input parameters, next_to_use:%d, rx_desc is %p, rx_buf is %p.\n",
+ next_to_use, rx_desc, rx_buf);
+		return true;
+ }
+
+ do {
+ if (nbl_get_rx_frag(rx_ring, rx_buf))
+ break;
+
+ for (i = 0; i < rx_ring->frags_num_per_page;
+ i++, rx_desc++, rx_buf++) {
+ rx_desc->addr =
+ cpu_to_le64(rx_buf->di->addr + rx_buf->offset);
+ rx_desc->len = cpu_to_le32(buf_len);
+ rx_desc->id = cpu_to_le16(next_to_use);
+
+ if (likely(head != next_to_use || i))
+ rx_desc->flags =
+ cpu_to_le16(rx_ring->avail_used_flags |
+ NBL_PACKED_DESC_F_WRITE);
+ else
+ head_flags =
+ cpu_to_le16(rx_ring->avail_used_flags |
+ NBL_PACKED_DESC_F_WRITE);
+ }
+
+ next_to_use += rx_ring->frags_num_per_page;
+ rx_ring->tail_ptr += rx_ring->frags_num_per_page;
+ count -= rx_ring->frags_num_per_page;
+ if (next_to_use == rx_ring->desc_num) {
+ next_to_use = 0;
+ rx_desc = NBL_RX_DESC(rx_ring, next_to_use);
+ rx_buf = NBL_RX_BUF(rx_ring, next_to_use);
+ rx_ring->avail_used_flags ^=
+ BIT(NBL_PACKED_DESC_F_AVAIL) |
+ BIT(NBL_PACKED_DESC_F_USED);
+ }
+ } while (count);
+
+ if (next_to_use != head) {
+		/* Ensure all descriptor writes are visible to the device
+		 * before publishing the head descriptor flags
+		 */
+		wmb();
+ head_desc->flags = head_flags;
+ rx_ring->next_to_use = next_to_use;
+ }
+
+ return !!count;
+}
+
+static void nbl_unmap_and_free_tx_resource(struct nbl_res_tx_ring *ring,
+ struct nbl_tx_buffer *tx_buffer,
+ bool free, bool in_napi)
+{
+ struct device *dma_dev = NBL_RING_TO_DMA_DEV(ring);
+
+ if (tx_buffer->skb) {
+ if (likely(free)) {
+ if (in_napi)
+ napi_consume_skb(tx_buffer->skb,
+ NBL_TX_POLL_WEIGHT);
+ else
+ dev_kfree_skb_any(tx_buffer->skb);
+ }
+
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_single(dma_dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ } else if (tx_buffer->page && dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(dma_dev, dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+ } else if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_single(dma_dev, dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+ }
+
+ tx_buffer->next_to_watch = NULL;
+ tx_buffer->skb = NULL;
+	tx_buffer->page = false;
+ tx_buffer->bytecount = 0;
+ tx_buffer->gso_segs = 0;
+ dma_unmap_len_set(tx_buffer, len, 0);
+}
+
+static void nbl_free_tx_ring_bufs(struct nbl_res_tx_ring *tx_ring)
+{
+ struct nbl_tx_buffer *tx_buffer;
+ u16 i;
+
+ i = tx_ring->next_to_clean;
+ tx_buffer = NBL_TX_BUF(tx_ring, i);
+ while (i != tx_ring->next_to_use) {
+ nbl_unmap_and_free_tx_resource(tx_ring, tx_buffer, true, false);
+ i++;
+ tx_buffer++;
+ if (i == tx_ring->desc_num) {
+ i = 0;
+ tx_buffer = NBL_TX_BUF(tx_ring, i);
+ }
+ }
+
+ tx_ring->next_to_clean = 0;
+ tx_ring->next_to_use = 0;
+ tx_ring->tail_ptr = 0;
+
+ tx_ring->used_wrap_counter = 1;
+ tx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+ memset(tx_ring->desc, 0, tx_ring->size);
+}
+
+static void nbl_res_txrx_stop_tx_ring(void *priv, u8 ring_index)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+ struct nbl_res_tx_ring *tx_ring =
+ NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index);
+ struct nbl_res_vector *vector =
+ NBL_RES_MGT_TO_VECTOR(res_mgt, ring_index);
+
+ vector->started = false;
+	/* Flush the NAPI task to ensure any scheduled poll has finished.
+	 * Since vector->started is now false, NAPI will no longer access
+	 * the ring memory (avoiding a use-after-free).
+	 */
+ napi_synchronize(&vector->nbl_napi.napi);
+ tx_ring->valid = false;
+
+ nbl_free_tx_ring_bufs(tx_ring);
+ WRITE_ONCE(NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index), tx_ring);
+
+ devm_kfree(dev, tx_ring->tx_bufs);
+ tx_ring->tx_bufs = NULL;
+
+ dmam_free_coherent(dma_dev, tx_ring->size, tx_ring->desc, tx_ring->dma);
+ tx_ring->desc = NULL;
+ tx_ring->dma = (dma_addr_t)NULL;
+ tx_ring->size = 0;
+
+ if (nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA],
+ tx_ring->queue_index))
+ netdev_tx_reset_queue(txring_txq(tx_ring));
+
+ nbl_debug(res_mgt->common, "Stop tx ring %d", ring_index);
+}
+
+static __always_inline bool nbl_dev_page_is_reusable(struct page *page, u8 nid)
+{
+ return likely(page_to_nid(page) == nid && !page_is_pfmemalloc(page));
+}
+
+static __always_inline int nbl_rx_cache_put(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_dma_info *dma_info)
+{
+ struct nbl_page_cache *cache = &rx_ring->page_cache;
+ u32 tail_next = (cache->tail + 1) & (NBL_MAX_CACHE_SIZE - 1);
+ struct nbl_rx_queue_stats *stats = &rx_ring->rx_stats;
+
+ if (tail_next == cache->head) {
+ stats->rx_cache_full++;
+ return 0;
+ }
+
+ if (!nbl_dev_page_is_reusable(dma_info->page, rx_ring->nid)) {
+ stats->rx_cache_waive++;
+ return 1;
+ }
+
+ cache->page_cache[cache->tail] = *dma_info;
+ cache->tail = tail_next;
+
+ return 2;
+}
+
+static __always_inline void
+nbl_page_release_dynamic(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_dma_info *dma_info, bool recycle)
+{
+ u32 ret;
+
+ if (likely(recycle)) {
+ ret = nbl_rx_cache_put(rx_ring, dma_info);
+ if (ret == 2)
+ return;
+ if (ret == 1)
+ goto free_page;
+ dma_unmap_page_attrs(rx_ring->dma_dev, dma_info->addr,
+ dma_info->size, DMA_FROM_DEVICE,
+ NBL_RX_DMA_ATTR);
+ page_pool_recycle_direct(rx_ring->page_pool, dma_info->page);
+
+ return;
+ }
+free_page:
+ dma_unmap_page_attrs(rx_ring->dma_dev, dma_info->addr, dma_info->size,
+ DMA_FROM_DEVICE, NBL_RX_DMA_ATTR);
+ page_pool_put_page(rx_ring->page_pool, dma_info->page, dma_info->size,
+ true);
+}
+
+static __always_inline void nbl_put_rx_frag(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_rx_buffer *buffer,
+ bool recycle)
+{
+ if (buffer->last_in_page)
+ nbl_page_release_dynamic(rx_ring, buffer->di, recycle);
+}
+
+static void nbl_free_rx_ring_bufs(struct nbl_res_rx_ring *rx_ring)
+{
+ struct nbl_rx_buffer *rx_buf;
+ u16 i;
+
+ i = rx_ring->next_to_clean;
+ rx_buf = NBL_RX_BUF(rx_ring, i);
+ while (i != rx_ring->next_to_use) {
+ nbl_put_rx_frag(rx_ring, rx_buf, false);
+ i++;
+ rx_buf++;
+ if (i == rx_ring->desc_num) {
+ i = 0;
+ rx_buf = NBL_RX_BUF(rx_ring, i);
+ }
+ }
+
+ for (i = rx_ring->page_cache.head; i != rx_ring->page_cache.tail;
+ i = (i + 1) & (NBL_MAX_CACHE_SIZE - 1)) {
+ struct nbl_dma_info *dma_info =
+ &rx_ring->page_cache.page_cache[i];
+
+ nbl_page_release_dynamic(rx_ring, dma_info, false);
+ }
+
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+ rx_ring->tail_ptr = 0;
+ rx_ring->page_cache.head = 0;
+ rx_ring->page_cache.tail = 0;
+
+ rx_ring->used_wrap_counter = 1;
+ rx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+ memset(rx_ring->desc, 0, rx_ring->size);
+}
+
+static dma_addr_t nbl_res_txrx_start_rx_ring(void *priv, u8 ring_index,
+ bool use_napi)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+ struct nbl_res_rx_ring *rx_ring =
+ NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index);
+ struct nbl_res_vector *vector =
+ NBL_RES_MGT_TO_VECTOR(res_mgt, ring_index);
+ struct page_pool_params pp_params = { 0 };
+ int pkt_len, hw_mtu, max_linear_len;
+ int buf_size;
+ int order = 0;
+ int i, j;
+ u16 rx_pad, tailroom;
+ size_t size;
+
+ if (rx_ring->rx_bufs) {
+ netif_err(common, drv, rx_ring->netdev,
+ "Try to setup a RX ring with buffer management array already allocated\n");
+ return (dma_addr_t)NULL;
+ }
+ hw_mtu = rx_ring->netdev->mtu + NBL_PKT_HDR_PAD + NBL_BUFFER_HDR_LEN;
+ tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ rx_pad = NBL_RX_PAD;
+ max_linear_len = NBL_RX_BUFSZ;
+ pkt_len = SKB_DATA_ALIGN(hw_mtu + rx_pad) + tailroom;
+ rx_ring->linear_skb = true;
+ if (pkt_len > max_linear_len) {
+ rx_ring->linear_skb = false;
+ rx_pad = 0;
+ tailroom = 0;
+ pkt_len = SKB_DATA_ALIGN(hw_mtu);
+ }
+ buf_size = NBL_RX_BUFSZ;
+ WARN_ON(buf_size > PAGE_SIZE);
+ rx_ring->frags_num_per_page = (PAGE_SIZE * (1 << order)) / buf_size;
+ WARN_ON(rx_ring->frags_num_per_page > NBL_MAX_BATCH_DESC);
+ rx_ring->buf_len = buf_size - rx_pad - tailroom;
+
+ pp_params.order = order;
+ pp_params.flags = 0;
+ pp_params.pool_size = rx_ring->desc_num;
+ pp_params.nid = dev_to_node(dev);
+ pp_params.dev = dev;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
+
+ if (dev_to_node(dev) == NUMA_NO_NODE)
+ rx_ring->nid = 0;
+ else
+ rx_ring->nid = dev_to_node(dev);
+
+ rx_ring->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(rx_ring->page_pool)) {
+ netif_err(common, drv, rx_ring->netdev,
+			  "Failed to create page_pool for RX queue %u\n",
+			  rx_ring->queue_index);
+ return (dma_addr_t)NULL;
+ }
+ size = array_size(rx_ring->desc_num / rx_ring->frags_num_per_page,
+ sizeof(struct nbl_dma_info));
+ rx_ring->di = kvzalloc_node(size, GFP_KERNEL, dev_to_node(dev));
+ if (!rx_ring->di) {
+ netif_err(common, drv, rx_ring->netdev,
+			  "Failed to allocate DMA info for RX queue %u\n",
+			  rx_ring->queue_index);
+ goto alloc_di_err;
+ }
+
+ rx_ring->rx_bufs = devm_kcalloc(dev, rx_ring->desc_num,
+ sizeof(*rx_ring->rx_bufs), GFP_KERNEL);
+ if (!rx_ring->rx_bufs)
+ goto alloc_buffers_err;
+
+ /* Alloc twice memory, and second half is used to back up the desc
+ * for desc checking
+ */
+ rx_ring->size = ALIGN(rx_ring->desc_num * sizeof(struct nbl_ring_desc),
+ PAGE_SIZE);
+ rx_ring->desc = dmam_alloc_coherent(dma_dev, rx_ring->size,
+ &rx_ring->dma,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!rx_ring->desc) {
+ netif_err(common, drv, rx_ring->netdev,
+ "Allocate %u bytes descriptor DMA memory for RX queue %u failed\n",
+ rx_ring->size, rx_ring->queue_index);
+ goto alloc_dma_err;
+ }
+
+ rx_ring->next_to_use = 0;
+ rx_ring->next_to_clean = 0;
+ rx_ring->tail_ptr = 0;
+
+ j = 0;
+ for (i = 0; i < rx_ring->desc_num / rx_ring->frags_num_per_page; i++) {
+ struct nbl_dma_info *di = &rx_ring->di[i];
+ struct nbl_rx_buffer *buffer = &rx_ring->rx_bufs[j];
+ int f;
+
+ di->size = (PAGE_SIZE * (1 << order));
+ for (f = 0; f < rx_ring->frags_num_per_page; f++, j++) {
+ buffer = &rx_ring->rx_bufs[j];
+ buffer->di = di;
+ buffer->size = buf_size;
+ buffer->offset = rx_pad + f * buf_size;
+ buffer->rx_pad = rx_pad;
+ buffer->first_in_page = (f == 0);
+ buffer->last_in_page =
+ (f == rx_ring->frags_num_per_page - 1);
+ }
+ }
+
+ if (nbl_alloc_rx_bufs(rx_ring, rx_ring->desc_num - NBL_MAX_BATCH_DESC))
+ goto alloc_rx_bufs_err;
+
+ rx_ring->valid = true;
+ if (use_napi && vector)
+ vector->started = true;
+
+ netif_dbg(common, drv, rx_ring->netdev, "Start rx ring %d", ring_index);
+ return rx_ring->dma;
+
+alloc_rx_bufs_err:
+ nbl_free_rx_ring_bufs(rx_ring);
+ dmam_free_coherent(dma_dev, rx_ring->size, rx_ring->desc, rx_ring->dma);
+ rx_ring->desc = NULL;
+ rx_ring->dma = (dma_addr_t)NULL;
+alloc_dma_err:
+ devm_kfree(dev, rx_ring->rx_bufs);
+ rx_ring->rx_bufs = NULL;
+alloc_buffers_err:
+ kvfree(rx_ring->di);
+alloc_di_err:
+ page_pool_destroy(rx_ring->page_pool);
+ rx_ring->size = 0;
+ return (dma_addr_t)NULL;
+}
+
+static void nbl_res_txrx_stop_rx_ring(void *priv, u8 ring_index)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+ struct nbl_res_rx_ring *rx_ring =
+ NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index);
+
+ rx_ring->valid = false;
+
+ nbl_free_rx_ring_bufs(rx_ring);
+ WRITE_ONCE(NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index), rx_ring);
+
+ devm_kfree(dev, rx_ring->rx_bufs);
+ kvfree(rx_ring->di);
+ rx_ring->rx_bufs = NULL;
+
+ dmam_free_coherent(dma_dev, rx_ring->size, rx_ring->desc, rx_ring->dma);
+ rx_ring->desc = NULL;
+ rx_ring->dma = (dma_addr_t)NULL;
+ rx_ring->size = 0;
+
+ page_pool_destroy(rx_ring->page_pool);
+
+ nbl_debug(res_mgt->common, "Stop rx ring %d", ring_index);
+}
+
+static void nbl_txrx_register_vsi_ring(void *priv, u16 vsi_index,
+ u16 ring_offset, u16 ring_num)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+ txrx_mgt->vsi_info[vsi_index].ring_offset = ring_offset;
+ txrx_mgt->vsi_info[vsi_index].ring_num = ring_num;
+}
+
+static __always_inline int nbl_maybe_stop_tx(struct nbl_res_tx_ring *tx_ring,
+ unsigned int size)
+{
+ if (likely(nbl_unused_tx_desc_count(tx_ring) >= size))
+ return 0;
+
+ if (!nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA],
+ tx_ring->queue_index))
+ return -EBUSY;
+
+ dev_dbg(NBL_RING_TO_DEV(tx_ring),
+ "unused_desc_count:%u, size:%u, stop queue %u\n",
+ nbl_unused_tx_desc_count(tx_ring), size, tx_ring->queue_index);
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+	/* Ensure the queue-stop is visible before re-checking for free
+	 * descriptors
+	 */
+	smp_mb();
+
+ if (likely(nbl_unused_tx_desc_count(tx_ring) < size))
+ return -EBUSY;
+
+ dev_dbg(NBL_RING_TO_DEV(tx_ring),
+ "unused_desc_count:%u, size:%u, start queue %u\n",
+ nbl_unused_tx_desc_count(tx_ring), size, tx_ring->queue_index);
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ return 0;
+}
+
+static unsigned int nbl_xmit_desc_count(struct sk_buff *skb)
+{
+ unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+
+ return nr_frags + 1;
+}
+
+/* set up TSO (TCP Segmentation Offload) */
+static int nbl_tx_tso(struct nbl_tx_buffer *first,
+ struct nbl_tx_hdr_param *hdr_param)
+{
+ struct sk_buff *skb = first->skb;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+ } l4;
+ u8 l4_start;
+ u32 payload_len;
+ u8 header_len = 0;
+ int err;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 1;
+
+ if (!skb_is_gso(skb))
+ return 1;
+
+ err = skb_cow_head(skb, 0);
+ if (err < 0)
+ return err;
+
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+
+	/* initialize IP header fields */
+ if (ip.v4->version == IP_VERSION_V4) {
+ ip.v4->tot_len = 0;
+ ip.v4->check = 0;
+ } else {
+ ip.v6->payload_len = 0;
+ }
+
+ /* length of (MAC + IP) header */
+ l4_start = (u8)(l4.hdr - skb->data);
+
+ /* l4 packet length */
+ payload_len = skb->len - l4_start;
+
+ /* remove l4 packet length from L4 pseudo-header checksum */
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+ csum_replace_by_diff(&l4.udp->check,
+ (__force __wsum)htonl(payload_len));
+ /* compute length of UDP segmentation header */
+		header_len = (u8)(sizeof(struct udphdr) + l4_start);
+ } else {
+ csum_replace_by_diff(&l4.tcp->check,
+ (__force __wsum)htonl(payload_len));
+ /* compute length of TCP segmentation header */
+ header_len = (u8)(l4.tcp->doff * 4 + l4_start);
+ }
+
+ hdr_param->tso = 1;
+ hdr_param->mss = skb_shinfo(skb)->gso_size;
+ hdr_param->total_hlen = header_len;
+
+ first->gso_segs = skb_shinfo(skb)->gso_segs;
+ first->bytecount += (first->gso_segs - 1) * header_len;
+ first->tx_flags = NBL_TX_FLAGS_TSO;
+
+ return first->gso_segs;
+}
+
+/* set up Tx checksum offload */
+static int nbl_tx_csum(struct nbl_tx_buffer *first,
+ struct nbl_tx_hdr_param *hdr_param)
+{
+ struct sk_buff *skb = first->skb;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+ } l4;
+ __be16 frag_off, protocol;
+ u8 inner_ip_type = 0, l4_type = 0, l4_csum = 0, l4_proto = 0;
+ u32 l2_len = 0, l3_len = 0, l4_len = 0;
+ unsigned char *exthdr;
+ int ret;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+
+ /* compute outer L2 header size */
+ l2_len = ip.hdr - skb->data;
+
+ protocol = vlan_get_protocol(skb);
+
+ if (protocol == htons(ETH_P_IP)) {
+ inner_ip_type = NBL_TX_IIPT_IPV4;
+ l4_proto = ip.v4->protocol;
+ } else if (protocol == htons(ETH_P_IPV6)) {
+ inner_ip_type = NBL_TX_IIPT_IPV6;
+ exthdr = ip.hdr + sizeof(*ip.v6);
+ l4_proto = ip.v6->nexthdr;
+
+ if (l4.hdr != exthdr) {
+ ret = ipv6_skip_exthdr(skb, exthdr - skb->data,
+ &l4_proto, &frag_off);
+ if (ret < 0)
+ return -1;
+ }
+ } else {
+ return -1;
+ }
+
+ l3_len = l4.hdr - ip.hdr;
+
+ switch (l4_proto) {
+ case IPPROTO_TCP:
+ l4_type = NBL_TX_L4T_TCP;
+ l4_len = l4.tcp->doff;
+ l4_csum = 1;
+ break;
+ case IPPROTO_UDP:
+ l4_type = NBL_TX_L4T_UDP;
+ l4_len = (sizeof(struct udphdr) >> 2);
+ l4_csum = 1;
+ break;
+ case IPPROTO_SCTP:
+ if (first->tx_flags & NBL_TX_FLAGS_TSO)
+ return -1;
+ l4_type = NBL_TX_L4T_RSV;
+ l4_len = (sizeof(struct sctphdr) >> 2);
+ l4_csum = 1;
+ break;
+ default:
+ if (first->tx_flags & NBL_TX_FLAGS_TSO)
+ return -2;
+
+		/* unsupported L4 protocol; the device cannot offload the L4
+		 * checksum, so compute it in software
+		 */
+ skb_checksum_help(skb);
+ return 0;
+ }
+
+ hdr_param->mac_len = l2_len >> 1;
+ hdr_param->ip_len = l3_len >> 2;
+ hdr_param->l4_len = l4_len;
+ hdr_param->l4_type = l4_type;
+ hdr_param->inner_ip_type = inner_ip_type;
+ hdr_param->l3_csum_en = 0;
+ hdr_param->l4_csum_en = l4_csum;
+
+ return 1;
+}
+
+static __always_inline int nbl_tx_fill_desc(struct nbl_res_tx_ring *tx_ring,
+ u64 dma, u32 size, u16 index,
+ bool first, bool page)
+{
+ struct nbl_tx_buffer *tx_buffer = NBL_TX_BUF(tx_ring, index);
+ struct nbl_ring_desc *tx_desc = NBL_TX_DESC(tx_ring, index);
+
+ tx_buffer->dma = dma;
+ tx_buffer->len = size;
+ tx_buffer->page = page;
+ tx_desc->addr = cpu_to_le64(dma);
+ tx_desc->len = cpu_to_le32(size);
+ if (!first)
+ tx_desc->flags = cpu_to_le16(tx_ring->avail_used_flags |
+ NBL_PACKED_DESC_F_NEXT);
+
+ index++;
+ if (index == tx_ring->desc_num) {
+ index = 0;
+		tx_ring->avail_used_flags ^= BIT(NBL_PACKED_DESC_F_AVAIL) |
+					     BIT(NBL_PACKED_DESC_F_USED);
+ }
+
+ return index;
+}
+
+static int nbl_map_skb(struct nbl_res_tx_ring *tx_ring, struct sk_buff *skb,
+ u16 first, u16 *desc_index)
+{
+ u16 index = *desc_index;
+ const skb_frag_t *frag;
+ unsigned int frag_num = skb_shinfo(skb)->nr_frags;
+ struct device *dma_dev = NBL_RING_TO_DMA_DEV(tx_ring);
+ unsigned int i;
+ unsigned int size;
+ dma_addr_t dma;
+
+ size = skb_headlen(skb);
+ dma = dma_map_single(dma_dev, skb->data, size, DMA_TO_DEVICE);
+ if (dma_mapping_error(dma_dev, dma))
+ return -1;
+
+ index = nbl_tx_fill_desc(tx_ring, dma, size, index, first, 0);
+
+ if (!frag_num) {
+ *desc_index = index;
+ return 0;
+ }
+
+ frag = &skb_shinfo(skb)->frags[0];
+ for (i = 0; i < frag_num; i++) {
+ size = skb_frag_size(frag);
+ dma = skb_frag_dma_map(dma_dev, frag, 0, size, DMA_TO_DEVICE);
+ if (dma_mapping_error(dma_dev, dma)) {
+ *desc_index = index;
+ return -1;
+ }
+
+ index = nbl_tx_fill_desc(tx_ring, dma, size, index, 0, 1);
+ frag++;
+ }
+
+ *desc_index = index;
+ return 0;
+}
+
+static __always_inline void
+nbl_tx_fill_tx_extend_header_leonis(union nbl_tx_extend_head *pkthdr,
+ struct nbl_tx_hdr_param *param)
+{
+ pkthdr->mac_len = param->mac_len;
+ pkthdr->ip_len = param->ip_len;
+ pkthdr->l4_len = param->l4_len;
+ pkthdr->l4_type = param->l4_type;
+ pkthdr->inner_ip_type = param->inner_ip_type;
+
+ pkthdr->l4s_sid = param->l4s_sid;
+ pkthdr->l4s_sync_ind = param->l4s_sync_ind;
+ pkthdr->l4s_hdl_ind = param->l4s_hdl_ind;
+ pkthdr->l4s_pbrac_mode = param->l4s_pbrac_mode;
+
+ pkthdr->mss = param->mss;
+ pkthdr->tso = param->tso;
+
+ pkthdr->fwd = param->fwd;
+ pkthdr->rss_lag_en = param->rss_lag_en;
+ pkthdr->dport = param->dport;
+ pkthdr->dport_id = param->dport_id;
+
+ pkthdr->l3_csum_en = param->l3_csum_en;
+ pkthdr->l4_csum_en = param->l4_csum_en;
+}
+
+static bool nbl_skb_is_lacp_or_lldp(struct sk_buff *skb)
+{
+ __be16 protocol;
+
+ protocol = vlan_get_protocol(skb);
+ if (protocol == htons(ETH_P_SLOW) || protocol == htons(ETH_P_LLDP))
+ return true;
+
+ return false;
+}
+
+static int nbl_tx_map(struct nbl_res_tx_ring *tx_ring, struct sk_buff *skb,
+ struct nbl_tx_hdr_param *hdr_param)
+{
+ struct device *dma_dev = NBL_RING_TO_DMA_DEV(tx_ring);
+ struct nbl_tx_buffer *first;
+ struct nbl_ring_desc *first_desc;
+ struct nbl_ring_desc *tx_desc;
+ union nbl_tx_extend_head *pkthdr;
+ dma_addr_t hdrdma;
+ int tso, csum;
+ u16 desc_index = tx_ring->next_to_use;
+ u16 tmp;
+ u16 head = desc_index;
+ u16 avail_used_flags = tx_ring->avail_used_flags;
+ u32 pkthdr_len, len;
+ bool can_push;
+ bool doorbell = true;
+
+ first_desc = NBL_TX_DESC(tx_ring, desc_index);
+ first = NBL_TX_BUF(tx_ring, desc_index);
+ first->gso_segs = 1;
+ first->bytecount = skb->len;
+ first->tx_flags = 0;
+ first->skb = skb;
+ skb_tx_timestamp(skb);
+
+ can_push = !skb_header_cloned(skb) &&
+ skb_headroom(skb) >= sizeof(*pkthdr);
+
+ if (can_push)
+ pkthdr = (union nbl_tx_extend_head *)(skb->data -
+ sizeof(*pkthdr));
+ else
+ pkthdr = (union nbl_tx_extend_head *)(skb->cb);
+
+ tso = nbl_tx_tso(first, hdr_param);
+ if (tso < 0) {
+ netdev_err(tx_ring->netdev, "tso ret:%d\n", tso);
+ goto out_drop;
+ }
+
+ csum = nbl_tx_csum(first, hdr_param);
+ if (csum < 0) {
+ netdev_err(tx_ring->netdev, "csum ret:%d\n", csum);
+ goto out_drop;
+ }
+
+ memset(pkthdr, 0, sizeof(*pkthdr));
+ switch (tx_ring->product_type) {
+ case NBL_LEONIS_TYPE:
+ nbl_tx_fill_tx_extend_header_leonis(pkthdr, hdr_param);
+ break;
+ default:
+ netdev_err(tx_ring->netdev,
+ "fill tx extend header failed, product type: %d, eth: %u.\n",
+ tx_ring->product_type, hdr_param->dport_id);
+ goto out_drop;
+ }
+
+ pkthdr_len = sizeof(union nbl_tx_extend_head);
+
+ if (can_push) {
+ __skb_push(skb, pkthdr_len);
+ if (nbl_map_skb(tx_ring, skb, 1, &desc_index))
+ goto dma_map_error;
+ __skb_pull(skb, pkthdr_len);
+ } else {
+ hdrdma = dma_map_single(dma_dev, pkthdr, pkthdr_len,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(dma_dev, hdrdma)) {
+ tx_ring->tx_stats.tx_dma_busy++;
+ return NETDEV_TX_BUSY;
+ }
+
+ first_desc->addr = cpu_to_le64(hdrdma);
+ first_desc->len = cpu_to_le32(pkthdr_len);
+
+ first->dma = hdrdma;
+ first->len = pkthdr_len;
+
+ desc_index++;
+ if (desc_index == tx_ring->desc_num) {
+ desc_index = 0;
+			tx_ring->avail_used_flags ^=
+				BIT(NBL_PACKED_DESC_F_AVAIL) |
+				BIT(NBL_PACKED_DESC_F_USED);
+ }
+ if (nbl_map_skb(tx_ring, skb, 0, &desc_index))
+ goto dma_map_error;
+ }
+
+ /* stats */
+ if (is_multicast_ether_addr(skb->data))
+ tx_ring->tx_stats.tx_multicast_packets += tso;
+ else
+ tx_ring->tx_stats.tx_unicast_packets += tso;
+
+ if (tso > 1) {
+ tx_ring->tx_stats.tso_packets++;
+ tx_ring->tx_stats.tso_bytes += skb->len;
+ }
+ tx_ring->tx_stats.tx_csum_packets += csum;
+ tmp = (desc_index == 0 ? tx_ring->desc_num : desc_index) - 1;
+ tx_desc = NBL_TX_DESC(tx_ring, tmp);
+ tx_desc->flags &= cpu_to_le16(~NBL_PACKED_DESC_F_NEXT);
+ len = le32_to_cpu(first_desc->len);
+ len += (hdr_param->total_hlen << NBL_TX_TOTAL_HEADERLEN_SHIFT);
+ first_desc->len = cpu_to_le32(len);
+ first_desc->id = cpu_to_le16(skb_shinfo(skb)->gso_size);
+
+ tx_ring->next_to_use = desc_index;
+ nbl_maybe_stop_tx(tx_ring, DESC_NEEDED);
+ if (nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA],
+ tx_ring->queue_index))
+ doorbell = __netdev_tx_sent_queue(txring_txq(tx_ring),
+ first->bytecount,
+ netdev_xmit_more());
+	/* Ensure all descriptor writes are visible to the device before
+	 * publishing the first descriptor's flags
+	 */
+	wmb();
+
+ first->next_to_watch = tx_desc;
+	/* set the first descriptor's flags last so the device sees a
+	 * complete chain
+	 */
+ if (first_desc == tx_desc)
+ first_desc->flags = cpu_to_le16(avail_used_flags);
+ else
+ first_desc->flags =
+ cpu_to_le16(avail_used_flags | NBL_PACKED_DESC_F_NEXT);
+
+	/* kick the doorbell via passthrough for performance */
+ if (doorbell)
+ writel(tx_ring->notify_qid, tx_ring->notify_addr);
+
+ return NETDEV_TX_OK;
+
+dma_map_error:
+ while (desc_index != head) {
+ if (unlikely(!desc_index))
+ desc_index = tx_ring->desc_num;
+ desc_index--;
+ nbl_unmap_and_free_tx_resource(tx_ring,
+ NBL_TX_BUF(tx_ring, desc_index),
+ false, false);
+ }
+
+ tx_ring->avail_used_flags = avail_used_flags;
+ tx_ring->tx_stats.tx_dma_busy++;
+ return NETDEV_TX_BUSY;
+
+out_drop:
+ netdev_err(tx_ring->netdev, "tx_map, free_skb\n");
+ tx_ring->tx_stats.tx_skb_free++;
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+}
+
+static netdev_tx_t nbl_res_txrx_start_xmit(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct nbl_resource_mgt *res_mgt =
+ NBL_ADAP_TO_RES_MGT(NBL_NETDEV_TO_ADAPTER(netdev));
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_tx_ring *tx_ring =
+ txrx_mgt->tx_rings[skb_get_queue_mapping(skb)];
+ struct nbl_tx_hdr_param hdr_param = {
+ .mac_len = 14 >> 1,
+ .ip_len = 20 >> 2,
+ .l4_len = 20 >> 2,
+ .mss = 256,
+ };
+ u16 vlan_tci;
+ __be16 vlan_proto;
+ unsigned int count;
+ int ret = 0;
+
+ count = nbl_xmit_desc_count(skb);
+	/* we cannot transmit a packet with more than 32 descriptors */
+ WARN_ON(count > MAX_DESC_NUM_PER_PKT);
+ if (unlikely(nbl_maybe_stop_tx(tx_ring, count))) {
+ if (net_ratelimit())
+ dev_dbg(NBL_RING_TO_DEV(tx_ring),
+ "no desc to tx pkt in queue %u\n",
+ tx_ring->queue_index);
+ tx_ring->tx_stats.tx_busy++;
+ return NETDEV_TX_BUSY;
+ }
+
+ if (tx_ring->vlan_proto || skb_vlan_tag_present(skb)) {
+ if (tx_ring->vlan_proto) {
+ vlan_proto = htons(tx_ring->vlan_proto);
+ vlan_tci = tx_ring->vlan_tci;
+ }
+
+ if (skb_vlan_tag_present(skb)) {
+ vlan_proto = skb->vlan_proto;
+ vlan_tci = skb_vlan_tag_get(skb);
+ }
+
+ skb = vlan_insert_tag_set_proto(skb, vlan_proto, vlan_tci);
+ if (!skb)
+ return NETDEV_TX_OK;
+ }
+	/* for dstore and eth, the minimum packet length is 60 bytes */
+ if (unlikely(eth_skb_pad(skb)))
+ return NETDEV_TX_OK;
+
+ hdr_param.dport_id = tx_ring->eth_id;
+ hdr_param.fwd = 1;
+ hdr_param.rss_lag_en = 0;
+
+ if (nbl_skb_is_lacp_or_lldp(skb)) {
+ hdr_param.fwd = NBL_TX_FWD_TYPE_CPU_ASSIGNED;
+ hdr_param.dport = NBL_TX_DPORT_ETH;
+ }
+
+ /* for unicast packet tx_map all */
+ ret = nbl_tx_map(tx_ring, skb, &hdr_param);
+ return ret;
+}
+
+static void nbl_res_txrx_kick_rx_ring(void *priv, u16 index)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+ struct nbl_notify_param notify_param = { 0 };
+ struct nbl_res_rx_ring *rx_ring =
+ NBL_RES_MGT_TO_RX_RING(res_mgt, index);
+
+ notify_param.notify_qid = rx_ring->notify_qid;
+ notify_param.tail_ptr = rx_ring->tail_ptr;
+	hw_ops->update_tail_ptr(NBL_RES_MGT_TO_HW_PRIV(res_mgt), &notify_param);
+}
+
+static struct nbl_napi_struct *nbl_res_txrx_get_vector_napi(void *priv,
+ u16 index)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+
+ if (!txrx_mgt->vectors || index >= txrx_mgt->rx_ring_num) {
+		nbl_err(common, "vectors not allocated or index out of range\n");
+ return NULL;
+ }
+
+ return &txrx_mgt->vectors[index]->nbl_napi;
+}
+
+static void nbl_res_txrx_set_vector_info(void *priv,
+ u8 __iomem *irq_enable_base,
+ u32 irq_data, u16 index, bool mask_en)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+
+ if (!txrx_mgt->vectors || index >= txrx_mgt->rx_ring_num) {
+		nbl_err(common, "vectors not allocated or index out of range\n");
+ return;
+ }
+
+ txrx_mgt->vectors[index]->irq_enable_base = irq_enable_base;
+ txrx_mgt->vectors[index]->irq_data = irq_data;
+ txrx_mgt->vectors[index]->net_msix_mask_en = mask_en;
+}
+
+static void nbl_res_get_pt_ops(void *priv, struct nbl_resource_pt_ops *pt_ops)
+{
+ pt_ops->start_xmit = nbl_res_txrx_start_xmit;
+}
+
+static u32 nbl_res_txrx_get_tx_headroom(void *priv)
+{
+ return sizeof(union nbl_tx_extend_head);
+}
+
+static void nbl_res_txrx_get_net_stats(void *priv, struct nbl_stats *net_stats)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_rx_ring *rx_ring;
+ struct nbl_res_tx_ring *tx_ring;
+ int i;
+ u64 bytes = 0, packets = 0;
+ u64 tso_packets = 0, tso_bytes = 0;
+ u64 tx_csum_packets = 0;
+ u64 rx_csum_packets = 0, rx_csum_errors = 0;
+ u64 tx_multicast_packets = 0, tx_unicast_packets = 0;
+ u64 rx_multicast_packets = 0, rx_unicast_packets = 0;
+ u64 tx_busy = 0, tx_dma_busy = 0;
+ u64 tx_desc_addr_err_cnt = 0;
+ u64 tx_desc_len_err_cnt = 0;
+ u64 rx_desc_addr_err_cnt = 0;
+ u64 rx_alloc_buf_err_cnt = 0;
+ u64 rx_cache_reuse = 0;
+ u64 rx_cache_full = 0;
+ u64 rx_cache_empty = 0;
+ u64 rx_cache_busy = 0;
+ u64 rx_cache_waive = 0;
+ u64 tx_skb_free = 0;
+ unsigned int start;
+
+ rcu_read_lock();
+ for (i = 0; i < txrx_mgt->rx_ring_num; i++) {
+ rx_ring = NBL_RES_MGT_TO_RX_RING(res_mgt, i);
+ do {
+ start = u64_stats_fetch_begin(&rx_ring->syncp);
+ bytes += rx_ring->stats.bytes;
+ packets += rx_ring->stats.packets;
+ rx_csum_packets += rx_ring->rx_stats.rx_csum_packets;
+ rx_csum_errors += rx_ring->rx_stats.rx_csum_errors;
+ rx_multicast_packets +=
+ rx_ring->rx_stats.rx_multicast_packets;
+ rx_unicast_packets +=
+ rx_ring->rx_stats.rx_unicast_packets;
+ rx_desc_addr_err_cnt +=
+ rx_ring->rx_stats.rx_desc_addr_err_cnt;
+ rx_alloc_buf_err_cnt +=
+ rx_ring->rx_stats.rx_alloc_buf_err_cnt;
+ rx_cache_reuse += rx_ring->rx_stats.rx_cache_reuse;
+ rx_cache_full += rx_ring->rx_stats.rx_cache_full;
+ rx_cache_empty += rx_ring->rx_stats.rx_cache_empty;
+ rx_cache_busy += rx_ring->rx_stats.rx_cache_busy;
+ rx_cache_waive += rx_ring->rx_stats.rx_cache_waive;
+ } while (u64_stats_fetch_retry(&rx_ring->syncp, start));
+ }
+
+ net_stats->rx_packets = packets;
+ net_stats->rx_bytes = bytes;
+
+ net_stats->rx_csum_packets = rx_csum_packets;
+ net_stats->rx_csum_errors = rx_csum_errors;
+ net_stats->rx_multicast_packets = rx_multicast_packets;
+ net_stats->rx_unicast_packets = rx_unicast_packets;
+
+ bytes = 0;
+ packets = 0;
+
+ for (i = 0; i < txrx_mgt->tx_ring_num; i++) {
+ tx_ring = NBL_RES_MGT_TO_TX_RING(res_mgt, i);
+ do {
+ start = u64_stats_fetch_begin(&tx_ring->syncp);
+ bytes += tx_ring->stats.bytes;
+ packets += tx_ring->stats.packets;
+ tso_packets += tx_ring->tx_stats.tso_packets;
+ tso_bytes += tx_ring->tx_stats.tso_bytes;
+ tx_csum_packets += tx_ring->tx_stats.tx_csum_packets;
+ tx_busy += tx_ring->tx_stats.tx_busy;
+ tx_dma_busy += tx_ring->tx_stats.tx_dma_busy;
+ tx_multicast_packets +=
+ tx_ring->tx_stats.tx_multicast_packets;
+ tx_unicast_packets +=
+ tx_ring->tx_stats.tx_unicast_packets;
+ tx_skb_free += tx_ring->tx_stats.tx_skb_free;
+ tx_desc_addr_err_cnt +=
+ tx_ring->tx_stats.tx_desc_addr_err_cnt;
+ tx_desc_len_err_cnt +=
+ tx_ring->tx_stats.tx_desc_len_err_cnt;
+ } while (u64_stats_fetch_retry(&tx_ring->syncp, start));
+ }
+
+ rcu_read_unlock();
+
+ net_stats->tx_bytes = bytes;
+ net_stats->tx_packets = packets;
+ net_stats->tso_packets = tso_packets;
+ net_stats->tso_bytes = tso_bytes;
+ net_stats->tx_csum_packets = tx_csum_packets;
+ net_stats->tx_busy = tx_busy;
+ net_stats->tx_dma_busy = tx_dma_busy;
+ net_stats->tx_multicast_packets = tx_multicast_packets;
+ net_stats->tx_unicast_packets = tx_unicast_packets;
+ net_stats->tx_skb_free = tx_skb_free;
+ net_stats->tx_desc_addr_err_cnt = tx_desc_addr_err_cnt;
+ net_stats->tx_desc_len_err_cnt = tx_desc_len_err_cnt;
+ net_stats->rx_desc_addr_err_cnt = rx_desc_addr_err_cnt;
+ net_stats->rx_alloc_buf_err_cnt = rx_alloc_buf_err_cnt;
+ net_stats->rx_cache_reuse = rx_cache_reuse;
+ net_stats->rx_cache_full = rx_cache_full;
+ net_stats->rx_cache_empty = rx_cache_empty;
+ net_stats->rx_cache_busy = rx_cache_busy;
+ net_stats->rx_cache_waive = rx_cache_waive;
+}
+
+/* NBL_TXRX_SET_OPS(ops_name, func)
+ *
+ * Use X Macros to reduce setup and remove codes.
+ */
+#define NBL_TXRX_OPS_TBL \
+do { \
+ NBL_TXRX_SET_OPS(get_resource_pt_ops, nbl_res_get_pt_ops); \
+ NBL_TXRX_SET_OPS(alloc_rings, nbl_res_txrx_alloc_rings); \
+ NBL_TXRX_SET_OPS(remove_rings, nbl_res_txrx_remove_rings); \
+ NBL_TXRX_SET_OPS(start_tx_ring, nbl_res_txrx_start_tx_ring); \
+ NBL_TXRX_SET_OPS(stop_tx_ring, nbl_res_txrx_stop_tx_ring); \
+ NBL_TXRX_SET_OPS(start_rx_ring, nbl_res_txrx_start_rx_ring); \
+ NBL_TXRX_SET_OPS(stop_rx_ring, nbl_res_txrx_stop_rx_ring); \
+ NBL_TXRX_SET_OPS(kick_rx_ring, nbl_res_txrx_kick_rx_ring); \
+ NBL_TXRX_SET_OPS(get_vector_napi, \
+ nbl_res_txrx_get_vector_napi); \
+ NBL_TXRX_SET_OPS(set_vector_info, \
+ nbl_res_txrx_set_vector_info); \
+ NBL_TXRX_SET_OPS(get_tx_headroom, \
+ nbl_res_txrx_get_tx_headroom); \
+ NBL_TXRX_SET_OPS(get_net_stats, nbl_res_txrx_get_net_stats); \
+ NBL_TXRX_SET_OPS(register_vsi_ring, \
+ nbl_txrx_register_vsi_ring); \
+} while (0)
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_txrx_setup_mgt(struct device *dev,
+ struct nbl_txrx_mgt **txrx_mgt)
+{
+ *txrx_mgt = devm_kzalloc(dev, sizeof(struct nbl_txrx_mgt), GFP_KERNEL);
+ if (!*txrx_mgt)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nbl_txrx_remove_mgt(struct device *dev,
+ struct nbl_txrx_mgt **txrx_mgt)
+{
+ devm_kfree(dev, *txrx_mgt);
+ *txrx_mgt = NULL;
+}
+
+int nbl_txrx_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_txrx_mgt **txrx_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ txrx_mgt = &NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+ return nbl_txrx_setup_mgt(dev, txrx_mgt);
+}
+
+void nbl_txrx_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+ struct device *dev;
+ struct nbl_txrx_mgt **txrx_mgt;
+
+ dev = NBL_RES_MGT_TO_DEV(res_mgt);
+ txrx_mgt = &NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+ if (!(*txrx_mgt))
+ return;
+
+ nbl_txrx_remove_mgt(dev, txrx_mgt);
+}
+
+int nbl_txrx_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_TXRX_SET_OPS(name, func) \
+ do { \
+ res_ops->NBL_NAME(name) = func; \
+ } while (0)
+ NBL_TXRX_OPS_TBL;
+#undef NBL_TXRX_SET_OPS
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
new file mode 100644
index 000000000000..de11f30a8210
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_TXRX_H_
+#define _NBL_TXRX_H_
+
+#include "nbl_resource.h"
+
+#define NBL_RING_TO_COMMON(ring) ((ring)->common)
+#define NBL_RING_TO_DEV(ring) ((ring)->dma_dev)
+#define NBL_RING_TO_DMA_DEV(ring) ((ring)->dma_dev)
+
+#define NBL_MIN_DESC_NUM 128
+#define NBL_MAX_DESC_NUM 32768
+
+#define NBL_PACKED_DESC_F_NEXT 1
+#define NBL_PACKED_DESC_F_WRITE 2
+#define NBL_PACKED_DESC_F_AVAIL 7
+#define NBL_PACKED_DESC_F_USED 15
+
+#define NBL_TX_DESC(tx_ring, i) (&(((tx_ring)->desc)[i]))
+#define NBL_RX_DESC(rx_ring, i) (&(((rx_ring)->desc)[i]))
+#define NBL_TX_BUF(tx_ring, i) (&(((tx_ring)->tx_bufs)[i]))
+#define NBL_RX_BUF(rx_ring, i) (&(((rx_ring)->rx_bufs)[i]))
+
+#define NBL_RX_BUF_256 256
+#define NBL_RX_HDR_SIZE NBL_RX_BUF_256
+#define NBL_BUFFER_HDR_LEN (sizeof(struct nbl_rx_extend_head))
+#define NBL_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
+#define NBL_RX_BUFSZ 2048
+#define NBL_RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
+#define NBL_TX_TOTAL_HEADERLEN_SHIFT 24
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+#define NBL_TX_POLL_WEIGHT 256
+#define NBL_TXD_DATALEN_BITS 16
+#define NBL_TXD_DATALEN_MAX BIT(NBL_TXD_DATALEN_BITS)
+#define MAX_DESC_NUM_PER_PKT (32)
+
+#define IP_VERSION_V4 (4)
+#define NBL_TX_FLAGS_TSO BIT(0)
+
+/* TX inner IP header type */
+enum nbl_tx_iipt {
+ NBL_TX_IIPT_NONE = 0x0,
+ NBL_TX_IIPT_IPV6 = 0x1,
+ NBL_TX_IIPT_IPV4 = 0x2,
+ NBL_TX_IIPT_RSV = 0x3
+};
+
+/* TX L4 packet type */
+enum nbl_tx_l4t {
+ NBL_TX_L4T_NONE = 0x0,
+ NBL_TX_L4T_TCP = 0x1,
+ NBL_TX_L4T_UDP = 0x2,
+ NBL_TX_L4T_RSV = 0x3
+};
+
+struct nbl_tx_hdr_param {
+ u8 l4s_pbrac_mode;
+ u8 l4s_hdl_ind;
+ u8 l4s_sync_ind;
+ u8 tso;
+ u16 l4s_sid;
+ u16 mss;
+ u8 mac_len;
+ u8 ip_len;
+ u8 l4_len;
+ u8 l4_type;
+ u8 inner_ip_type;
+ u8 l3_csum_en;
+ u8 l4_csum_en;
+ u16 total_hlen;
+ u16 dport_id:10;
+ u16 fwd:2;
+ u16 dport:3;
+ u16 rss_lag_en:1;
+};
+
+union nbl_tx_extend_head {
+ struct {
+ /* DW0 */
+ u32 mac_len :5;
+ u32 ip_len :5;
+ u32 l4_len :4;
+ u32 l4_type :2;
+ u32 inner_ip_type :2;
+ u32 external_ip_type :2;
+ u32 external_ip_len :5;
+ u32 l4_tunnel_type :2;
+ u32 l4_tunnel_len :5;
+ /* DW1 */
+ u32 l4s_sid :10;
+ u32 l4s_sync_ind :1;
+ u32 l4s_redun_ind :1;
+ u32 l4s_redun_head_ind :1;
+ u32 l4s_hdl_ind :1;
+ u32 l4s_pbrac_mode :1;
+ u32 rsv0 :2;
+ u32 mss :14;
+ u32 tso :1;
+ /* DW2 */
+ /* if dport = NBL_TX_DPORT_ETH; dport_info = 0
+ * if dport = NBL_TX_DPORT_HOST; dport_info = host queue id
+ * if dport = NBL_TX_DPORT_ECPU; dport_info = ecpu queue_id
+ */
+ u32 dport_info :11;
+ /* if dport = NBL_TX_DPORT_ETH; dport_id[3:0] = eth port id,
+ * dport_id[9:4] = lag id
+ * if dport = NBL_TX_DPORT_HOST; dport_id[9:0] = host vsi_id
+ * if dport = NBL_TX_DPORT_ECPU; dport_id[9:0] = ecpu vsi_id
+ */
+ u32 dport_id :10;
+#define NBL_TX_DPORT_ID_LAG_OFFSET (4)
+ u32 dport :3;
+#define NBL_TX_DPORT_ETH (0)
+#define NBL_TX_DPORT_HOST (1)
+#define NBL_TX_DPORT_ECPU (2)
+#define NBL_TX_DPORT_EMP (3)
+#define NBL_TX_DPORT_BMC (4)
+ u32 fwd :2;
+#define NBL_TX_FWD_TYPE_DROP (0)
+#define NBL_TX_FWD_TYPE_NORMAL (1)
+#define NBL_TX_FWD_TYPE_RSV (2)
+#define NBL_TX_FWD_TYPE_CPU_ASSIGNED (3)
+ u32 rss_lag_en :1;
+ u32 l4_csum_en :1;
+ u32 l3_csum_en :1;
+ u32 rsv1 :3;
+ };
+ u32 dw[3];
+};
+
+struct nbl_rx_extend_head {
+ /* DW0 */
+ /* 0x0:eth, 0x1:host, 0x2:ecpu, 0x3:emp, 0x4:bmc */
+ uint32_t sport :3;
+ uint32_t dport_info :11;
+ /* sport = 0, sport_id[3:0] = eth id,
+ * sport = 1, sport_id[9:0] = host vsi_id,
+ * sport = 2, sport_id[9:0] = ecpu vsi_id,
+ */
+ uint32_t sport_id :10;
+ /* 0x0:drop, 0x1:normal, 0x2:cpu upcall */
+ uint32_t fwd :2;
+ uint32_t rsv0 :6;
+ /* DW1 */
+ uint32_t error_code :6;
+ uint32_t ptype :10;
+ uint32_t profile_id :4;
+ uint32_t checksum_status :1;
+ uint32_t rsv1 :1;
+ uint32_t l4s_sid :10;
+ /* DW2 */
+ uint32_t rsv3 :2;
+ uint32_t l4s_hdl_ind :1;
+ uint32_t l4s_tcp_offset :14;
+ uint32_t l4s_resync_ind :1;
+ uint32_t l4s_check_ind :1;
+ uint32_t l4s_dec_ind :1;
+ uint32_t rsv2 :4;
+ uint32_t num_buffers :8;
+} __packed;
+
+static inline u16 nbl_unused_rx_desc_count(struct nbl_res_rx_ring *ring)
+{
+ u16 ntc = ring->next_to_clean;
+ u16 ntu = ring->next_to_use;
+
+ return ((ntc > ntu) ? 0 : ring->desc_num) + ntc - ntu - 1;
+}
+
+static inline u16 nbl_unused_tx_desc_count(struct nbl_res_tx_ring *ring)
+{
+ u16 ntc = ring->next_to_clean;
+ u16 ntu = ring->next_to_use;
+
+ return ((ntc > ntu) ? 0 : ring->desc_num) + ntc - ntu - 1;
+}
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
index 0252082c55dc..176cce5e08d6 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
@@ -85,6 +85,9 @@ struct nbl_hw_ops {
void (*enable_msix_irq)(void *priv, u16 global_vec_id);
u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id,
u32 *irq_data);
+ void (*update_tail_ptr)(void *priv, struct nbl_notify_param *param);
+ u8 __iomem *(*get_tail_ptr)(void *priv);
+
u32 (*get_fw_eth_num)(void *priv);
u32 (*get_fw_eth_map)(void *priv);
void (*get_board_info)(void *priv, struct nbl_board_port_info *board);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
index 1379567fb847..92abf384ef70 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
@@ -18,7 +18,10 @@ struct nbl_resource_pt_ops {
struct nbl_resource_ops {
int (*init_chip_module)(void *priv);
void (*deinit_chip_module)(void *priv);
+ void (*get_resource_pt_ops)(void *priv,
+ struct nbl_resource_pt_ops *pt_ops);
int (*queue_init)(void *priv);
+
int (*configure_msix_map)(void *priv, u16 func_id, u16 num_net_msix,
u16 num_others_msix, bool net_msix_mask_en);
int (*destroy_msix_map)(void *priv, u16 func_id);
@@ -26,6 +29,21 @@ struct nbl_resource_ops {
bool enable_msix);
u16 (*get_global_vector)(void *priv, u16 vsi_id, u16 local_vec_id);
u16 (*get_msix_entry_id)(void *priv, u16 vsi_id, u16 local_vec_id);
+
+ int (*alloc_rings)(void *priv, struct net_device *netdev,
+ struct nbl_ring_param *param);
+ void (*remove_rings)(void *priv);
+ dma_addr_t (*start_tx_ring)(void *priv, u8 ring_index);
+ void (*stop_tx_ring)(void *priv, u8 ring_index);
+ dma_addr_t (*start_rx_ring)(void *priv, u8 ring_index, bool use_napi);
+ void (*stop_rx_ring)(void *priv, u8 ring_index);
+ void (*update_rx_ring)(void *priv, u16 index);
+ void (*kick_rx_ring)(void *priv, u16 index);
+ struct nbl_napi_struct *(*get_vector_napi)(void *priv, u16 index);
+ void (*set_vector_info)(void *priv, u8 __iomem *irq_enable_base,
+ u32 irq_data, u16 index, bool mask_en);
+ void (*register_vsi_ring)(void *priv, u16 vsi_index, u16 ring_offset,
+ u16 ring_num);
int (*register_net)(void *priv, u16 func_id,
struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result);
@@ -51,10 +69,14 @@ struct nbl_resource_ops {
u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id,
u32 *irq_data);
+
u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
u8 *logic_eth_id);
int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ u32 (*get_tx_headroom)(void *priv);
+
+ void (*get_net_stats)(void *priv, struct nbl_stats *queue_stats);
void (*get_rxfh_indir_size)(void *priv, u16 vsi_id,
u32 *rxfh_indir_size);
int (*set_rxfh_indir)(void *priv, u16 vsi_id, const u32 *indir,
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index d4e18f2a2931..6d57391f85f9 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -42,6 +42,11 @@ enum {
NBL_VSI_MAX,
};
+enum {
+ NBL_TX = 0,
+ NBL_RX,
+};
+
struct nbl_func_caps {
u32 has_ctrl:1;
u32 has_net:1;
@@ -120,6 +125,11 @@ struct nbl_queue_cfg_param {
u16 half_offload_en;
};
+struct nbl_msix_info_param {
+ u16 msix_num;
+ struct msix_entry *msix_entries;
+};
+
struct nbl_queue_stats {
u64 packets;
u64 bytes;
@@ -182,6 +192,11 @@ struct nbl_stats {
u64 rx_bytes;
};
+struct nbl_notify_param {
+ u16 notify_qid;
+ u16 tail_ptr;
+};
+
enum nbl_fw_port_speed {
NBL_FW_PORT_SPEED_10G,
NBL_FW_PORT_SPEED_25G,
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v3 net-next 10/15] net/nebula-matrix: add txrx interrupt handling logic
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (8 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 09/15] net/nebula-matrix: add txrx resource interfaces and tx func illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 11/15] net/nebula-matrix: add Dispatch layer implementation illusion.wang
` (5 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
This patch implements the TX and RX interrupt handling logic.
Transmit (TX) handling:
nbl_res_txrx_clean_tx_irq cleans up completed transmit descriptors,
frees their resources, and updates statistics. It checks whether a
descriptor has been used (via nbl_ring_desc_used), updates the TX
packet, byte, and descriptor counts, and wakes the TX queue when
needed (netif_wake_subqueue).
Receive (RX) handling:
nbl_res_txrx_clean_rx_irq processes received packets, builds the
sk_buff, and passes it to the upper layer. It checks whether a
descriptor has been used, retrieves the packet length from the
descriptor and fetches the RX buffer, builds the sk_buff
(nbl_construct_skb), handles checksum offload (nbl_rx_csum) and VLAN
tags (nbl_rx_vlan_pop), and reallocates RX buffers when necessary
(nbl_alloc_rx_bufs).
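Both clean routines decide descriptor ownership with the same packed-ring test. The sketch below is a minimal model of that check, written for illustration only (the names mirror, but are not, the driver's own definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit positions mirror NBL_PACKED_DESC_F_AVAIL/USED in nbl_txrx.h. */
#define F_AVAIL 7
#define F_USED  15

/*
 * A packed-ring descriptor has been completed by hardware when its
 * AVAIL and USED flag bits are equal and both match the ring's current
 * wrap counter; the counter flips each time the ring index wraps.
 */
static bool desc_used(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & (1u << F_AVAIL));
	bool used = !!(flags & (1u << F_USED));

	return avail == used && used == wrap_counter;
}
```

This is the same ownership rule used by virtio-style packed rings: the driver never rewrites the flags itself, it only tracks which wrap state it expects.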
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.c | 371 ++++++++++++++++++
1 file changed, 371 insertions(+)
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
index 939166a074e2..7008402751a7 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
@@ -835,6 +835,166 @@ static void nbl_res_txrx_stop_rx_ring(void *priv, u8 ring_index)
nbl_debug(res_mgt->common, "Stop rx ring %d", ring_index);
}
+static __always_inline bool nbl_ring_desc_used(struct nbl_ring_desc *ring_desc,
+ bool used_wrap_counter)
+{
+ bool avail;
+ bool used;
+ u16 flags;
+
+ flags = le16_to_cpu(ring_desc->flags);
+ avail = !!(flags & BIT(NBL_PACKED_DESC_F_AVAIL));
+ used = !!(flags & BIT(NBL_PACKED_DESC_F_USED));
+
+ return avail == used && used == used_wrap_counter;
+}
+
+static int nbl_res_txrx_clean_tx_irq(struct nbl_res_tx_ring *tx_ring)
+{
+ struct nbl_tx_buffer *tx_buffer;
+ struct nbl_ring_desc *tx_desc;
+ unsigned int i = tx_ring->next_to_clean;
+ unsigned int total_tx_pkts = 0;
+ unsigned int total_tx_bytes = 0;
+ unsigned int total_tx_descs = 0;
+ int count = 64;
+
+ tx_buffer = NBL_TX_BUF(tx_ring, i);
+ tx_desc = NBL_TX_DESC(tx_ring, i);
+ i -= tx_ring->desc_num;
+
+ do {
+ struct nbl_ring_desc *end_desc = tx_buffer->next_to_watch;
+
+ if (!end_desc)
+ break;
+
+ /* ensure next_to_watch is read before the descriptor flags */
+ smp_rmb();
+
+ if (!nbl_ring_desc_used(tx_desc, tx_ring->used_wrap_counter))
+ break;
+
+ total_tx_pkts += tx_buffer->gso_segs;
+ total_tx_bytes += tx_buffer->bytecount;
+
+ while (true) {
+ total_tx_descs++;
+ nbl_unmap_and_free_tx_resource(tx_ring, tx_buffer, true,
+ true);
+ if (tx_desc == end_desc)
+ break;
+ i++;
+ tx_buffer++;
+ tx_desc++;
+ if (unlikely(!i)) {
+ i -= tx_ring->desc_num;
+ tx_buffer = NBL_TX_BUF(tx_ring, 0);
+ tx_desc = NBL_TX_DESC(tx_ring, 0);
+ tx_ring->used_wrap_counter ^= 1;
+ }
+ }
+
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->desc_num;
+ tx_buffer = NBL_TX_BUF(tx_ring, 0);
+ tx_desc = NBL_TX_DESC(tx_ring, 0);
+ tx_ring->used_wrap_counter ^= 1;
+ }
+
+ prefetch(tx_desc);
+
+ } while (--count);
+
+ i += tx_ring->desc_num;
+
+ tx_ring->next_to_clean = i;
+
+ u64_stats_update_begin(&tx_ring->syncp);
+ tx_ring->stats.bytes += total_tx_bytes;
+ tx_ring->stats.packets += total_tx_pkts;
+ tx_ring->stats.descs += total_tx_descs;
+ u64_stats_update_end(&tx_ring->syncp);
+ if (nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA],
+ tx_ring->queue_index))
+ netdev_tx_completed_queue(txring_txq(tx_ring), total_tx_pkts,
+ total_tx_bytes);
+
+#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
+ if (unlikely(total_tx_pkts && netif_carrier_ok(tx_ring->netdev) &&
+ nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA],
+ tx_ring->queue_index) &&
+ (nbl_unused_tx_desc_count(tx_ring) >= TX_WAKE_THRESHOLD))) {
+ /* Make sure that anybody stopping the queue after this
+ * sees the new next_to_clean.
+ */
+ smp_mb();
+
+ if (__netif_subqueue_stopped(tx_ring->netdev,
+ tx_ring->queue_index)) {
+ netif_wake_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+ dev_dbg(NBL_RING_TO_DEV(tx_ring), "wake queue %u\n",
+ tx_ring->queue_index);
+ }
+ }
+
+ return count;
+}
+
+static void nbl_rx_csum(struct nbl_res_rx_ring *rx_ring, struct sk_buff *skb,
+ struct nbl_rx_extend_head *hdr)
+{
+ skb->ip_summed = CHECKSUM_NONE;
+ skb_checksum_none_assert(skb);
+
+ /* if the user disabled rx csum offload, let the stack verify it */
+ if (!(rx_ring->netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ if (!hdr->checksum_status)
+ return;
+
+ if (hdr->error_code) {
+ rx_ring->rx_stats.rx_csum_errors++;
+ return;
+ }
+
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ rx_ring->rx_stats.rx_csum_packets++;
+}
+
+static __always_inline void nbl_add_rx_frag(struct nbl_rx_buffer *rx_buffer,
+ struct sk_buff *skb,
+ unsigned int size)
+{
+ page_ref_inc(rx_buffer->di->page);
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->di->page,
+ rx_buffer->offset, size, rx_buffer->size);
+}
+
+static __always_inline int nbl_rx_vlan_pop(struct nbl_res_rx_ring *rx_ring,
+ struct sk_buff *skb)
+{
+ struct vlan_ethhdr *veth = (struct vlan_ethhdr *)skb->data;
+
+ if (!rx_ring->vlan_proto)
+ return 0;
+
+ if (rx_ring->vlan_proto != ntohs(veth->h_vlan_proto) ||
+ (rx_ring->vlan_tci & VLAN_VID_MASK) !=
+ (ntohs(veth->h_vlan_TCI) & VLAN_VID_MASK))
+ return 1;
+
+ memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN);
+ __skb_pull(skb, VLAN_HLEN);
+
+ return 0;
+}
+
static void nbl_txrx_register_vsi_ring(void *priv, u16 vsi_index,
u16 ring_offset, u16 ring_num)
{
@@ -845,6 +1005,92 @@ static void nbl_txrx_register_vsi_ring(void *priv, u16 vsi_index,
txrx_mgt->vsi_info[vsi_index].ring_num = ring_num;
}
+/*
+ * The current version supports merging multiple descriptors for one
+ * packet.
+ */
+static struct sk_buff *nbl_construct_skb(struct nbl_res_rx_ring *rx_ring,
+ struct napi_struct *napi,
+ struct nbl_rx_buffer *rx_buf,
+ unsigned int size)
+{
+ struct sk_buff *skb;
+ char *p, *buf;
+ int tailroom;
+ int shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ unsigned int truesize = rx_buf->size;
+ unsigned int headlen;
+
+ p = page_address(rx_buf->di->page) + rx_buf->offset;
+ buf = p - NBL_RX_PAD;
+ p += NBL_BUFFER_HDR_LEN;
+ tailroom = truesize - size - NBL_RX_PAD;
+ size -= NBL_BUFFER_HDR_LEN;
+
+ if (rx_ring->linear_skb && tailroom >= shinfo_size) {
+ skb = build_skb(buf, truesize);
+ if (unlikely(!skb))
+ return NULL;
+
+ page_ref_inc(rx_buf->di->page);
+ skb_reserve(skb, p - buf);
+ skb_put(skb, size);
+ goto ok;
+ }
+
+ skb = napi_alloc_skb(napi, NBL_RX_HDR_SIZE);
+ if (unlikely(!skb))
+ return NULL;
+
+ headlen = size;
+ if (headlen > NBL_RX_HDR_SIZE)
+ headlen = eth_get_headlen(skb->dev, p, NBL_RX_HDR_SIZE);
+
+ memcpy(__skb_put(skb, headlen), p, ALIGN(headlen, sizeof(long)));
+ size -= headlen;
+ if (size) {
+ page_ref_inc(rx_buf->di->page);
+ skb_add_rx_frag(skb, 0, rx_buf->di->page,
+ rx_buf->offset + NBL_BUFFER_HDR_LEN + headlen,
+ size, truesize);
+ }
+ok:
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+
+ return skb;
+}
+
+static __always_inline struct nbl_rx_buffer *
+nbl_get_rx_buf(struct nbl_res_rx_ring *rx_ring)
+{
+ struct nbl_rx_buffer *rx_buf;
+
+ rx_buf = NBL_RX_BUF(rx_ring, rx_ring->next_to_clean);
+ prefetchw(rx_buf->di->page);
+
+ dma_sync_single_range_for_cpu(rx_ring->dma_dev, rx_buf->di->addr,
+ rx_buf->offset, rx_ring->buf_len,
+ DMA_FROM_DEVICE);
+
+ return rx_buf;
+}
+
+static __always_inline void nbl_put_rx_buf(struct nbl_res_rx_ring *rx_ring,
+ struct nbl_rx_buffer *rx_buf)
+{
+ u16 ntc = rx_ring->next_to_clean + 1;
+
+ /* if at the end of the ring, reset ntc and flip used wrap bit */
+ if (unlikely(ntc >= rx_ring->desc_num)) {
+ ntc = 0;
+ rx_ring->used_wrap_counter ^= 1;
+ }
+
+ rx_ring->next_to_clean = ntc;
+ prefetch(NBL_RX_DESC(rx_ring, ntc));
+
+ nbl_put_rx_frag(rx_ring, rx_buf, true);
+}
+
static __always_inline int nbl_maybe_stop_tx(struct nbl_res_tx_ring *tx_ring,
unsigned int size)
{
@@ -874,6 +1120,130 @@ static __always_inline int nbl_maybe_stop_tx(struct nbl_res_tx_ring *tx_ring,
return 0;
}
+static int nbl_res_txrx_clean_rx_irq(struct nbl_res_rx_ring *rx_ring,
+ struct napi_struct *napi, int budget)
+{
+ struct nbl_ring_desc *rx_desc;
+ struct nbl_rx_buffer *rx_buf;
+ struct nbl_rx_extend_head *hdr;
+ struct sk_buff *skb = NULL;
+ unsigned int total_rx_pkts = 0;
+ unsigned int total_rx_bytes = 0;
+ unsigned int size;
+ u32 rx_multicast_packets = 0;
+ u32 rx_unicast_packets = 0;
+ u16 desc_count = 0;
+ u16 num_buffers = 0;
+ u16 cleaned_count = nbl_unused_rx_desc_count(rx_ring);
+ bool failure = false;
+ bool drop = false;
+ u16 tmp;
+
+ while (likely(total_rx_pkts < budget)) {
+ rx_desc = NBL_RX_DESC(rx_ring, rx_ring->next_to_clean);
+ if (!nbl_ring_desc_used(rx_desc, rx_ring->used_wrap_counter))
+ break;
+
+ dma_rmb();
+ size = le32_to_cpu(rx_desc->len);
+ rx_buf = nbl_get_rx_buf(rx_ring);
+
+ desc_count++;
+
+ if (skb) {
+ nbl_add_rx_frag(rx_buf, skb, size);
+ } else {
+ hdr = page_address(rx_buf->di->page) + rx_buf->offset;
+ net_prefetch(hdr);
+ skb = nbl_construct_skb(rx_ring, napi, rx_buf, size);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.rx_alloc_buf_err_cnt++;
+ break;
+ }
+
+ num_buffers = (u16)hdr->num_buffers;
+ nbl_rx_csum(rx_ring, skb, hdr);
+ drop = nbl_rx_vlan_pop(rx_ring, skb);
+ }
+
+ cleaned_count++;
+ nbl_put_rx_buf(rx_ring, rx_buf);
+ if (desc_count < num_buffers)
+ continue;
+ desc_count = 0;
+
+ if (unlikely(eth_skb_pad(skb))) {
+ skb = NULL;
+ drop = 0;
+ continue;
+ }
+
+ if (unlikely(drop)) {
+ dev_kfree_skb_any(skb);
+ skb = NULL;
+ drop = 0;
+ continue;
+ }
+
+ total_rx_bytes += skb->len;
+ skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+ if (unlikely(skb->pkt_type == PACKET_BROADCAST ||
+ skb->pkt_type == PACKET_MULTICAST))
+ rx_multicast_packets++;
+ else
+ rx_unicast_packets++;
+
+ napi_gro_receive(napi, skb);
+ skb = NULL;
+ drop = 0;
+ total_rx_pkts++;
+ }
+ tmp = cleaned_count & (~(NBL_MAX_BATCH_DESC - 1));
+ if (tmp)
+ failure = nbl_alloc_rx_bufs(rx_ring, tmp);
+
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->stats.packets += total_rx_pkts;
+ rx_ring->stats.bytes += total_rx_bytes;
+ rx_ring->rx_stats.rx_multicast_packets += rx_multicast_packets;
+ rx_ring->rx_stats.rx_unicast_packets += rx_unicast_packets;
+ u64_stats_update_end(&rx_ring->syncp);
+
+ return failure ? budget : total_rx_pkts;
+}
+
+static int nbl_res_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct nbl_napi_struct *nbl_napi =
+ container_of(napi, struct nbl_napi_struct, napi);
+ struct nbl_res_vector *vector =
+ container_of(nbl_napi, struct nbl_res_vector, nbl_napi);
+ struct nbl_res_tx_ring *tx_ring;
+ struct nbl_res_rx_ring *rx_ring;
+ int complete = 1, cleaned = 0, tx_done = 1;
+
+ tx_ring = vector->tx_ring;
+ rx_ring = vector->rx_ring;
+
+ if (vector->started) {
+ tx_done = nbl_res_txrx_clean_tx_irq(tx_ring);
+ cleaned = nbl_res_txrx_clean_rx_irq(rx_ring, napi, budget);
+ }
+ complete = tx_done && (cleaned < budget);
+ if (!complete)
+ return budget;
+
+ if (!napi_complete_done(napi, cleaned))
+ return min_t(int, cleaned, budget - 1);
+
+ /* unmask the irq directly for performance */
+ if (vector->net_msix_mask_en)
+ writel(vector->irq_data,
+ (void __iomem *)vector->irq_enable_base);
+
+ return min_t(int, cleaned, budget - 1);
+}
+
static unsigned int nbl_xmit_desc_count(struct sk_buff *skb)
{
unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -1416,6 +1786,7 @@ static void nbl_res_txrx_set_vector_info(void *priv,
static void nbl_res_get_pt_ops(void *priv, struct nbl_resource_pt_ops *pt_ops)
{
pt_ops->start_xmit = nbl_res_txrx_start_xmit;
+ pt_ops->napi_poll = nbl_res_napi_poll;
}
static u32 nbl_res_txrx_get_tx_headroom(void *priv)
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v3 net-next 11/15] net/nebula-matrix: add Dispatch layer implementation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (9 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 10/15] net/nebula-matrix: add txrx interrupt handling logic illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 13/15] net/nebula-matrix: add net dev init/reinit operation illusion.wang
` (4 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
The functions in this patch follow a common pattern, so it should be
quick to review.
The Dispatch layer supports two routing paths:
Dispatch layer -> Resource layer -> HW layer
The Dispatch layer routes tasks to the Resource layer, which may
interact with the HW layer for hardware writes.
Dispatch layer -> Channel layer
The Dispatch layer redirects hooks to the Channel layer.
The primary challenge at the Dispatch layer lies in determining the
routing approach, namely, how to decide which interfaces should directly
invoke the Resource layer's interfaces and which should transmit
requests via channels to the management PF for processing.
To address this, a ctrl_lvl (control level) mechanism is established,
which comprises two parts: the control level declared by each interface
and the control level configured by the upper layer. The effect is that
when the upper layer configures a specific control level, all interfaces
declaring this level will directly call the Resource layer's interfaces;
otherwise, they will send requests via channels.
For instance, consider a regular PF that possesses network (net)
capabilities but lacks control (ctrl) capabilities. It will only
configure NET_LVL at the Dispatch layer. In this scenario, all
interfaces declaring NET_LVL will directly invoke the Resource layer's
interfaces, while those declaring CTRL_LVL will send requests via
channels to the management PF. Conversely, if it is the management PF,
it will configure both NET_LVL and CTRL_LVL at the Dispatch layer.
Consequently, interfaces declaring CTRL_LVL will also directly call the
Resource layer's interfaces without sending requests via channels. This
configuration logic can be dynamic (see nbl_disp_setup_ctrl_lvl).
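The routing decision described above can be sketched as a small predicate. This is an illustrative model only; the level names and the helper are hypothetical, not the driver's actual structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative level bits; the driver's real definitions may differ. */
#define NET_LVL  (1u << 0)
#define CTRL_LVL (1u << 1)

/*
 * An interface goes straight to the Resource layer only when every
 * level it declares has been configured on this function; otherwise
 * the request is sent over a channel to the management PF.
 */
static bool calls_resource_layer(uint32_t configured, uint32_t declared)
{
	return (configured & declared) == declared;
}
```

Under this model a regular PF configures only NET_LVL, so CTRL_LVL interfaces fall through to the channel path, while the management PF configures both levels and handles everything locally.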
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.c | 1863 +++++++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.h | 33 +
.../nbl/nbl_include/nbl_def_dispatch.h | 58 +
3 files changed, 1954 insertions(+)
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
index 51cb298ffb1a..181ec3796819 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
@@ -6,6 +6,1846 @@
#include <linux/etherdevice.h>
#include "nbl_dispatch.h"
+static int
+nbl_disp_chan_register_net_req(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_register_net_info param = {0};
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+ int ret;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.pf_bar_start = register_param->pf_bar_start;
+ param.pf_bdf = register_param->pf_bdf;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REGISTER_NET,
+ &param, sizeof(param), (void *)register_result,
+ sizeof(*register_result), 1);
+
+ ret = chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+ return ret;
+}
+
+static void nbl_disp_chan_register_net_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_register_net_info param;
+ struct nbl_register_net_result result = { 0 };
+ struct nbl_register_net_param register_param = { 0 };
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ int copy_len;
+ int err = NBL_CHAN_RESP_OK;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ memset(&param, 0, sizeof(struct nbl_chan_param_register_net_info));
+ copy_len = data_len < sizeof(struct nbl_chan_param_register_net_info) ?
+ data_len :
+ sizeof(struct nbl_chan_param_register_net_info);
+ memcpy(&param, data, copy_len);
+
+ register_param.pf_bar_start = param.pf_bar_start;
+ register_param.pf_bdf = param.pf_bdf;
+
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_net, p, src_id,
+ &register_param, &result);
+ if (ret)
+ err = NBL_CHAN_RESP_ERR;
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REGISTER_NET, msg_id, err,
+ &result, sizeof(result));
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d, src_id:%d\n",
+ ret, NBL_CHAN_MSG_REGISTER_NET, src_id);
+}
+
+static int nbl_disp_unregister_net(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->unregister_net, p, 0);
+}
+
+static int nbl_disp_chan_unregister_net_req(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_UNREGISTER_NET,
+ NULL, 0, NULL, 0, 1);
+
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_unregister_net_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->unregister_net, p,
+ src_id);
+ if (ret)
+ err = NBL_CHAN_RESP_ERR;
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_UNREGISTER_NET, msg_id, err,
+ NULL, 0);
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d, src_id:%d\n",
+ ret, NBL_CHAN_MSG_UNREGISTER_NET, src_id);
+}
+
+static int nbl_disp_chan_alloc_txrx_queues_req(void *priv, u16 vsi_id,
+ u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_alloc_txrx_queues param = { 0 };
+ struct nbl_chan_param_alloc_txrx_queues result = { 0 };
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.vsi_id = vsi_id;
+ param.queue_num = queue_num;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_ALLOC_TXRX_QUEUES,
+ &param, sizeof(param), &result, sizeof(result), 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_alloc_txrx_queues_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_chan_param_alloc_txrx_queues *param;
+ struct nbl_chan_param_alloc_txrx_queues result = {0};
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param = (struct nbl_chan_param_alloc_txrx_queues *)data;
+ result.queue_num = param->queue_num;
+
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->alloc_txrx_queues, p,
+ param->vsi_id, param->queue_num);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, msg_id,
+ err, &result, sizeof(result));
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_free_txrx_queues_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_FREE_TXRX_QUEUES,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_free_txrx_queues_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ vsi_id = *(u16 *)data;
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->free_txrx_queues, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_FREE_TXRX_QUEUES, msg_id,
+ err, NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_register_vsi2q_req(void *priv, u16 vsi_index,
+ u16 vsi_id, u16 queue_offset,
+ u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_param_register_vsi2q param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_index = vsi_index;
+ param.vsi_id = vsi_id;
+ param.queue_offset = queue_offset;
+ param.queue_num = queue_num;
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REGISTER_VSI2Q,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_register_vsi2q_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_register_vsi2q *param = NULL;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+
+ param = (struct nbl_chan_param_register_vsi2q *)data;
+
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_vsi2q, p,
+ param->vsi_index, param->vsi_id,
+ param->queue_offset, param->queue_num);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REGISTER_VSI2Q, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_setup_q2vsi_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_Q2VSI,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_setup_q2vsi_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ vsi_id = *(u16 *)data;
+
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_q2vsi, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_Q2VSI, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_remove_q2vsi_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_Q2VSI,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_q2vsi_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ vsi_id = *(u16 *)data;
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_q2vsi, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_Q2VSI, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_setup_rss_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_RSS,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_setup_rss_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ vsi_id = *(u16 *)data;
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_rss, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_RSS, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_remove_rss_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_RSS,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_rss_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ vsi_id = *(u16 *)data;
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_rss, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_RSS, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_setup_queue_req(void *priv,
+ struct nbl_txrx_queue_param *_param,
+ bool is_tx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_setup_queue param;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ memcpy(&param.queue_param, _param, sizeof(param.queue_param));
+ param.is_tx = is_tx;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_QUEUE,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_setup_queue_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_setup_queue *param;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param = (struct nbl_chan_param_setup_queue *)data;
+
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_queue, p,
+ &param->queue_param, param->is_tx);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_QUEUE, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_remove_all_queues_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_ALL_QUEUES,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_all_queues_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ vsi_id = *(u16 *)data;
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_all_queues, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_ALL_QUEUES, msg_id,
+ err, NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_cfg_dsch_req(void *priv, u16 vsi_id, bool vld)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_param_cfg_dsch param = { 0 };
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+ param.vld = vld;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_CFG_DSCH, &param,
+ sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_cfg_dsch_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_cfg_dsch *param;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param = (struct nbl_chan_param_cfg_dsch *)data;
+
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->cfg_dsch, p,
+ param->vsi_id, param->vld);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CFG_DSCH, msg_id, err, NULL,
+ 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_setup_cqs(void *priv, u16 vsi_id, u16 real_qps,
+ bool rss_indir_set)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_cqs, p, vsi_id,
+ real_qps, rss_indir_set);
+ return ret;
+}
+
+static int nbl_disp_chan_setup_cqs_req(void *priv, u16 vsi_id, u16 real_qps,
+ bool rss_indir_set)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_setup_cqs param = { 0 };
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.vsi_id = vsi_id;
+ param.real_qps = real_qps;
+ param.rss_indir_set = rss_indir_set;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_CQS, &param,
+ sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_setup_cqs_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_setup_cqs param;
+ struct nbl_chan_ack_info chan_ack;
+ int copy_len;
+ int err = NBL_CHAN_RESP_OK;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ memset(&param, 0, sizeof(struct nbl_chan_param_setup_cqs));
+ param.rss_indir_set = true;
+ copy_len = data_len < sizeof(struct nbl_chan_param_setup_cqs) ?
+ data_len :
+ sizeof(struct nbl_chan_param_setup_cqs);
+ memcpy(&param, data, copy_len);
+
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_cqs, p,
+ param.vsi_id, param.real_qps,
+ param.rss_indir_set);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_CQS, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_remove_cqs_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_CQS,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_cqs_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ vsi_id = *(u16 *)data;
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_cqs, p, vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_CQS, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_set_promisc_mode(void *priv, u16 vsi_id, u16 mode)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->set_promisc_mode, p,
+ vsi_id, mode);
+ return ret;
+}
+
+static int nbl_disp_chan_set_promisc_mode_req(void *priv, u16 vsi_id, u16 mode)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_param_set_promisc_mode param = {0};
+ struct nbl_chan_send_info chan_send = {0};
+
+ param.vsi_id = vsi_id;
+ param.mode = mode;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SET_PROSISC_MODE,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_set_promisc_mode_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_param_set_promisc_mode *param = NULL;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+
+ param = (struct nbl_chan_param_set_promisc_mode *)data;
+ err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->set_promisc_mode, p,
+ param->vsi_id, param->mode);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_PROSISC_MODE, msg_id,
+ err, NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_get_rxfh_indir_size_req(void *priv, u16 vsi_id,
+ u32 *rxfh_indir_size)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send = {0};
+ struct nbl_common_info *common;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf,
+ NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE, &vsi_id, sizeof(vsi_id),
+ rxfh_indir_size, sizeof(u32), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_get_rxfh_indir_size_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ u32 rxfh_indir_size = 0;
+ int ret = NBL_CHAN_RESP_OK;
+ u16 vsi_id;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ vsi_id = *(u16 *)data;
+ NBL_OPS_CALL(res_ops->get_rxfh_indir_size,
+ (p, vsi_id, &rxfh_indir_size));
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE, msg_id,
+ ret, &rxfh_indir_size, sizeof(u32));
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_clear_queues_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send = { 0 };
+ struct nbl_common_info *common;
+
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_CLEAR_QUEUE,
+ &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_clear_queues_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ u16 *vsi_id = (u16 *)data;
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->clear_queues, p, *vsi_id);
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CLEAR_QUEUE, msg_id,
+ NBL_CHAN_RESP_OK, NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static u16 nbl_disp_chan_get_vsi_id_req(void *priv, u16 func_id, u16 type)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ struct nbl_chan_param_get_vsi_id param = {0};
+ struct nbl_chan_param_get_vsi_id result = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.type = type;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_VSI_ID,
+ &param, sizeof(param), &result, sizeof(result), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+
+ return result.vsi_id;
+}
+
+static void nbl_disp_chan_get_vsi_id_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_param_get_vsi_id *param;
+ struct nbl_chan_param_get_vsi_id result = { 0 };
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret = 0;
+
+ param = (struct nbl_chan_param_get_vsi_id *)data;
+
+ result.vsi_id =
+ NBL_OPS_CALL_RET(res_ops->get_vsi_id, (p, src_id, param->type));
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_VSI_ID, msg_id, err,
+ &result, sizeof(result));
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d\n",
+ ret, NBL_CHAN_MSG_GET_VSI_ID);
+}
+
+static void nbl_disp_chan_get_eth_id_req(void *priv, u16 vsi_id, u8 *eth_mode,
+ u8 *eth_id, u8 *logic_eth_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_get_eth_id param = { 0 };
+ struct nbl_chan_param_get_eth_id result = { 0 };
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_ETH_ID,
+ &param, sizeof(param), &result, sizeof(result), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+
+ *eth_mode = result.eth_mode;
+ *eth_id = result.eth_id;
+ *logic_eth_id = result.logic_eth_id;
+}
+
+static void nbl_disp_chan_get_eth_id_resp(void *priv, u16 src_id, u16 msg_id,
+ void *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_param_get_eth_id *param;
+ struct nbl_chan_param_get_eth_id result = { 0 };
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret = 0;
+
+ param = (struct nbl_chan_param_get_eth_id *)data;
+
+ NBL_OPS_CALL(res_ops->get_eth_id,
+ (p, param->vsi_id, &result.eth_mode, &result.eth_id,
+ &result.logic_eth_id));
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_ETH_ID, msg_id, err,
+ &result, sizeof(result));
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d\n",
+ ret, NBL_CHAN_MSG_GET_ETH_ID);
+}
+
+static int nbl_disp_alloc_rings(void *priv, struct net_device *netdev,
+ struct nbl_ring_param *ring_param)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_RET(res_ops->alloc_rings, (p, netdev, ring_param));
+ return ret;
+}
+
+static void nbl_disp_remove_rings(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ if (!disp_mgt)
+ return;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->remove_rings,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+}
+
+static dma_addr_t nbl_disp_start_tx_ring(void *priv, u8 ring_index)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ dma_addr_t addr = 0;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ addr = NBL_OPS_CALL_RET(res_ops->start_tx_ring, (p, ring_index));
+ return addr;
+}
+
+static void nbl_disp_stop_tx_ring(void *priv, u8 ring_index)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ if (!disp_mgt)
+ return;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->stop_tx_ring, (p, ring_index));
+}
+
+static dma_addr_t nbl_disp_start_rx_ring(void *priv, u8 ring_index,
+ bool use_napi)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ dma_addr_t addr = 0;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ addr = NBL_OPS_CALL_RET(res_ops->start_rx_ring,
+ (p, ring_index, use_napi));
+
+ return addr;
+}
+
+static void nbl_disp_stop_rx_ring(void *priv, u8 ring_index)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ if (!disp_mgt)
+ return;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->stop_rx_ring, (p, ring_index));
+}
+
+static void nbl_disp_kick_rx_ring(void *priv, u16 index)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->kick_rx_ring, (p, index));
+}
+
+static struct nbl_napi_struct *nbl_disp_get_vector_napi(void *priv, u16 index)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return NBL_OPS_CALL_RET_PTR(res_ops->get_vector_napi, (p, index));
+}
+
+static void nbl_disp_set_vector_info(void *priv, u8 __iomem *irq_enable_base,
+ u32 irq_data, u16 index, bool mask_en)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->set_vector_info,
+ (p, irq_enable_base, irq_data, index, mask_en));
+}
+
+static void nbl_disp_register_vsi_ring(void *priv, u16 vsi_index,
+ u16 ring_offset, u16 ring_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->register_vsi_ring,
+ (p, vsi_index, ring_offset, ring_num));
+}
+
+static void nbl_disp_get_res_pt_ops(void *priv,
+ struct nbl_resource_pt_ops *pt_ops)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->get_resource_pt_ops, (p, pt_ops));
+}
+
+static int
+nbl_disp_register_net(void *priv, struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_net, p, 0,
+ register_param, register_result);
+ return ret;
+}
+
+static int nbl_disp_alloc_txrx_queues(void *priv, u16 vsi_id, u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->alloc_txrx_queues, p,
+ vsi_id, queue_num);
+ return ret;
+}
+
+static void nbl_disp_free_txrx_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->free_txrx_queues, p, vsi_id);
+}
+
+static int nbl_disp_register_vsi2q(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_vsi2q, p,
+ vsi_index, vsi_id, queue_offset,
+ queue_num);
+}
+
+static int nbl_disp_setup_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_q2vsi, p, vsi_id);
+}
+
+static void nbl_disp_remove_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_q2vsi, p, vsi_id);
+}
+
+static int nbl_disp_setup_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_rss, p, vsi_id);
+}
+
+static void nbl_disp_remove_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_rss, p, vsi_id);
+}
+
+static int nbl_disp_setup_queue(void *priv, struct nbl_txrx_queue_param *param,
+ bool is_tx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_queue, p, param,
+ is_tx);
+ return ret;
+}
+
+static void nbl_disp_remove_all_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_all_queues, p, vsi_id);
+}
+
+static int nbl_disp_cfg_dsch(void *priv, u16 vsi_id, bool vld)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->cfg_dsch, p, vsi_id,
+ vld);
+ return ret;
+}
+
+static void nbl_disp_remove_cqs(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_cqs, p, vsi_id);
+}
+
+static u8 __iomem *
+nbl_disp_get_msix_irq_enable_info(void *priv, u16 global_vec_id, u32 *irq_data)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ if (!disp_mgt)
+ return NULL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return NBL_OPS_CALL_RET_PTR(res_ops->get_msix_irq_enable_info,
+ (p, global_vec_id, irq_data));
+}
+
+static void nbl_disp_get_net_stats(void *priv, struct nbl_stats *net_stats)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->get_net_stats, (p, net_stats));
+}
+
+static void nbl_disp_get_rxfh_indir_size(void *priv, u16 vsi_id,
+ u32 *rxfh_indir_size)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->get_rxfh_indir_size,
+ (p, vsi_id, rxfh_indir_size));
+}
+
+static int nbl_disp_set_rxfh_indir(void *priv, u16 vsi_id, const u32 *indir,
+ u32 indir_size)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_RET(res_ops->set_rxfh_indir,
+ (p, vsi_id, indir, indir_size));
+ return ret;
+}
+
+static int nbl_disp_chan_set_rxfh_indir_req(void *priv, u16 vsi_id,
+ const u32 *indir, u32 indir_size)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_chan_param_set_rxfh_indir *param = NULL;
+ struct nbl_chan_send_info chan_send = {0};
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_common_info *common;
+ int ret = 0;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param = kzalloc(sizeof(*param), GFP_KERNEL);
+ if (!param)
+ return -ENOMEM;
+
+ param->vsi_id = vsi_id;
+ param->indir_size = indir_size;
+ memcpy(param->indir, indir, indir_size * sizeof(param->indir[0]));
+
+ NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common),
+ NBL_CHAN_MSG_SET_RXFH_INDIR, param, sizeof(*param), NULL,
+ 0, 1);
+ ret = chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+ kfree(param);
+ return ret;
+}
+
+static void nbl_disp_chan_set_rxfh_indir_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_set_rxfh_indir *param;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ param = (struct nbl_chan_param_set_rxfh_indir *)data;
+
+ err = NBL_OPS_CALL_RET(res_ops->set_rxfh_indir,
+ (p, param->vsi_id, param->indir,
+ param->indir_size));
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_RXFH_INDIR, msg_id, err,
+ NULL, 0);
+ chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_deinit_chip_module(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->deinit_chip_module,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+}
+
+static int nbl_disp_init_chip_module(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+ int ret;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_RET(res_ops->init_chip_module,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+ return ret;
+}
+
+static int nbl_disp_queue_init(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+ int ret;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_RET(res_ops->queue_init,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+ return ret;
+}
+
+static int nbl_disp_configure_msix_map(void *priv, u16 num_net_msix,
+ u16 num_others_msix,
+ bool net_msix_mask_en)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->configure_msix_map, p, 0,
+ num_net_msix, num_others_msix,
+ net_msix_mask_en);
+ return ret;
+}
+
+static int nbl_disp_chan_configure_msix_map_req(void *priv, u16 num_net_msix,
+ u16 num_others_msix,
+ bool net_msix_mask_en)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_chan_param_cfg_msix_map param = {0};
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.num_net_msix = num_net_msix;
+ param.num_others_msix = num_others_msix;
+ param.msix_mask_en = net_msix_mask_en;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf,
+ NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, &param, sizeof(param),
+ NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_configure_msix_map_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_cfg_msix_map *param;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ param = (struct nbl_chan_param_cfg_msix_map *)data;
+
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->configure_msix_map, p,
+ src_id, param->num_net_msix,
+ param->num_others_msix,
+ param->msix_mask_en);
+ if (ret)
+ err = NBL_CHAN_RESP_ERR;
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, msg_id,
+ err, NULL, 0);
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d\n",
+ ret, NBL_CHAN_MSG_CONFIGURE_MSIX_MAP);
+}
+
+static int nbl_disp_chan_destroy_msix_map_req(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_DESTROY_MSIX_MAP,
+ NULL, 0, NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_destroy_msix_map_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->destroy_msix_map, p,
+ src_id);
+ if (ret)
+ err = NBL_CHAN_RESP_ERR;
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_DESTROY_MSIX_MAP, msg_id,
+ err, NULL, 0);
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d\n",
+ ret, NBL_CHAN_MSG_DESTROY_MSIX_MAP);
+}
+
+static int nbl_disp_chan_enable_mailbox_irq_req(void *priv, u16 vector_id,
+ bool enable_msix)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_enable_mailbox_irq param = { 0 };
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.vector_id = vector_id;
+ param.enable_msix = enable_msix;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf,
+ NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, &param, sizeof(param),
+ NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static void nbl_disp_chan_enable_mailbox_irq_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_enable_mailbox_irq *param;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ param = (struct nbl_chan_param_enable_mailbox_irq *)data;
+
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->enable_mailbox_irq, p,
+ src_id, param->vector_id,
+ param->enable_msix);
+ if (ret)
+ err = NBL_CHAN_RESP_ERR;
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, msg_id,
+ err, NULL, 0);
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d\n",
+ ret, NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ);
+}
+
+static u16 nbl_disp_chan_get_global_vector_req(void *priv, u16 vsi_id,
+ u16 local_vec_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_get_global_vector param = { 0 };
+ struct nbl_chan_param_get_global_vector result = { 0 };
+ struct nbl_chan_send_info chan_send;
+ struct nbl_common_info *common;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+ param.vsi_id = vsi_id;
+ param.vector_id = local_vec_id;
+
+ NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_GLOBAL_VECTOR,
+ &param, sizeof(param), &result, sizeof(result), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+
+ return result.vector_id;
+}
+
+static void nbl_disp_chan_get_global_vector_resp(void *priv, u16 src_id,
+ u16 msg_id, void *data,
+ u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_get_global_vector *param;
+ struct nbl_chan_param_get_global_vector result;
+ struct nbl_chan_ack_info chan_ack;
+ int err = NBL_CHAN_RESP_OK;
+ int ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ param = (struct nbl_chan_param_get_global_vector *)data;
+
+ result.vector_id =
+ NBL_OPS_CALL_RET(res_ops->get_global_vector,
+ (p, param->vsi_id, param->vector_id));
+
+ NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_GLOBAL_VECTOR, msg_id,
+ err, &result, sizeof(result));
+ ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_ack);
+ if (ret)
+ dev_err(dev,
+ "channel send ack failed with ret: %d, msg_type: %d\n",
+ ret, NBL_CHAN_MSG_GET_GLOBAL_VECTOR);
+}
+
+static int nbl_disp_destroy_msix_map(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->destroy_msix_map, p, 0);
+ return ret;
+}
+
+static int nbl_disp_enable_mailbox_irq(void *priv, u16 vector_id,
+ bool enable_msix)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ int ret;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->enable_mailbox_irq, p, 0,
+ vector_id, enable_msix);
+ return ret;
+}
+
+static u16 nbl_disp_get_global_vector(void *priv, u16 vsi_id, u16 local_vec_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ u16 ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_RET(res_ops->get_global_vector,
+ (p, vsi_id, local_vec_id));
+ return ret;
+}
+
+static u16 nbl_disp_get_msix_entry_id(void *priv, u16 vsi_id, u16 local_vec_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+ u16 ret;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL_RET(res_ops->get_msix_entry_id,
+ (p, vsi_id, local_vec_id));
+ return ret;
+}
+
+static u16 nbl_disp_get_vsi_id(void *priv, u16 func_id, u16 type)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+ struct nbl_resource_ops *res_ops;
+
+ if (!disp_mgt)
+ return -EINVAL;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return NBL_OPS_CALL_RET(res_ops->get_vsi_id, (p, func_id, type));
+}
+
+static void nbl_disp_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode,
+ u8 *eth_id, u8 *logic_eth_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->get_eth_id,
+ (p, vsi_id, eth_mode, eth_id, logic_eth_id));
+}
+
+static u32 nbl_disp_get_tx_headroom(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ u32 ret;
+
+ ret = NBL_OPS_CALL_RET(res_ops->get_tx_headroom,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+ return ret;
+}
+
+static void nbl_disp_clear_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+ NBL_OPS_CALL_LOCK(disp_mgt, res_ops->clear_queues, p, vsi_id);
+}
+
+/* NBL_DISP_SET_OPS(disp_op_name, func, ctrl_lvl, msg_type, msg_req, msg_resp)
+ * ctrl_lvl defines when this disp_op should go directly to res_op
+ * instead of sending a channel msg.
+ * Use X macros to reduce code in channel_op and disp_op setup/remove.
+ */
+#define NBL_DISP_OPS_TBL \
+do { \
+ NBL_DISP_SET_OPS(init_chip_module, nbl_disp_init_chip_module, \
+ NBL_DISP_CTRL_LVL_MGT, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(deinit_chip_module, \
+ nbl_disp_deinit_chip_module, \
+ NBL_DISP_CTRL_LVL_MGT, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_resource_pt_ops, nbl_disp_get_res_pt_ops, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(queue_init, nbl_disp_queue_init, \
+ NBL_DISP_CTRL_LVL_MGT, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(configure_msix_map, \
+ nbl_disp_configure_msix_map, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, \
+ nbl_disp_chan_configure_msix_map_req, \
+ nbl_disp_chan_configure_msix_map_resp); \
+ NBL_DISP_SET_OPS(destroy_msix_map, nbl_disp_destroy_msix_map, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_DESTROY_MSIX_MAP, \
+ nbl_disp_chan_destroy_msix_map_req, \
+ nbl_disp_chan_destroy_msix_map_resp); \
+ NBL_DISP_SET_OPS(enable_mailbox_irq, \
+ nbl_disp_enable_mailbox_irq, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, \
+ nbl_disp_chan_enable_mailbox_irq_req, \
+ nbl_disp_chan_enable_mailbox_irq_resp); \
+ NBL_DISP_SET_OPS(get_global_vector, nbl_disp_get_global_vector, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_GET_GLOBAL_VECTOR, \
+ nbl_disp_chan_get_global_vector_req, \
+ nbl_disp_chan_get_global_vector_resp); \
+ NBL_DISP_SET_OPS(get_msix_entry_id, nbl_disp_get_msix_entry_id, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(alloc_rings, nbl_disp_alloc_rings, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(remove_rings, nbl_disp_remove_rings, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(start_tx_ring, nbl_disp_start_tx_ring, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(stop_tx_ring, nbl_disp_stop_tx_ring, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(start_rx_ring, nbl_disp_start_rx_ring, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(stop_rx_ring, nbl_disp_stop_rx_ring, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(kick_rx_ring, nbl_disp_kick_rx_ring, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_vector_napi, nbl_disp_get_vector_napi, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(set_vector_info, nbl_disp_set_vector_info, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(register_vsi_ring, nbl_disp_register_vsi_ring, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(register_net, nbl_disp_register_net, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REGISTER_NET, \
+ nbl_disp_chan_register_net_req, \
+ nbl_disp_chan_register_net_resp); \
+ NBL_DISP_SET_OPS(unregister_net, nbl_disp_unregister_net, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_UNREGISTER_NET, \
+ nbl_disp_chan_unregister_net_req, \
+ nbl_disp_chan_unregister_net_resp); \
+ NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, \
+ nbl_disp_chan_alloc_txrx_queues_req, \
+ nbl_disp_chan_alloc_txrx_queues_resp); \
+ NBL_DISP_SET_OPS(free_txrx_queues, nbl_disp_free_txrx_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_FREE_TXRX_QUEUES, \
+ nbl_disp_chan_free_txrx_queues_req, \
+ nbl_disp_chan_free_txrx_queues_resp); \
+ NBL_DISP_SET_OPS(register_vsi2q, nbl_disp_register_vsi2q, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REGISTER_VSI2Q, \
+ nbl_disp_chan_register_vsi2q_req, \
+ nbl_disp_chan_register_vsi2q_resp); \
+ NBL_DISP_SET_OPS(setup_q2vsi, nbl_disp_setup_q2vsi, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_Q2VSI,\
+ nbl_disp_chan_setup_q2vsi_req, \
+ nbl_disp_chan_setup_q2vsi_resp); \
+ NBL_DISP_SET_OPS(remove_q2vsi, nbl_disp_remove_q2vsi, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_Q2VSI,\
+ nbl_disp_chan_remove_q2vsi_req, \
+ nbl_disp_chan_remove_q2vsi_resp); \
+ NBL_DISP_SET_OPS(setup_rss, nbl_disp_setup_rss, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_RSS, \
+ nbl_disp_chan_setup_rss_req, \
+ nbl_disp_chan_setup_rss_resp); \
+ NBL_DISP_SET_OPS(remove_rss, nbl_disp_remove_rss, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_RSS,\
+ nbl_disp_chan_remove_rss_req, \
+ nbl_disp_chan_remove_rss_resp); \
+ NBL_DISP_SET_OPS(setup_queue, nbl_disp_setup_queue, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_QUEUE,\
+ nbl_disp_chan_setup_queue_req, \
+ nbl_disp_chan_setup_queue_resp); \
+ NBL_DISP_SET_OPS(remove_all_queues, nbl_disp_remove_all_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REMOVE_ALL_QUEUES, \
+ nbl_disp_chan_remove_all_queues_req, \
+ nbl_disp_chan_remove_all_queues_resp); \
+ NBL_DISP_SET_OPS(cfg_dsch, nbl_disp_cfg_dsch, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_CFG_DSCH, \
+ nbl_disp_chan_cfg_dsch_req, \
+ nbl_disp_chan_cfg_dsch_resp); \
+ NBL_DISP_SET_OPS(setup_cqs, nbl_disp_setup_cqs, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_CQS, \
+ nbl_disp_chan_setup_cqs_req, \
+ nbl_disp_chan_setup_cqs_resp); \
+ NBL_DISP_SET_OPS(remove_cqs, nbl_disp_remove_cqs, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_CQS,\
+ nbl_disp_chan_remove_cqs_req, \
+ nbl_disp_chan_remove_cqs_resp); \
+ NBL_DISP_SET_OPS(get_msix_irq_enable_info, \
+ nbl_disp_get_msix_irq_enable_info, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_vsi_id, nbl_disp_get_vsi_id, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_VSI_ID,\
+ nbl_disp_chan_get_vsi_id_req, \
+ nbl_disp_chan_get_vsi_id_resp); \
+ NBL_DISP_SET_OPS(get_eth_id, nbl_disp_get_eth_id, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_ETH_ID,\
+ nbl_disp_chan_get_eth_id_req, \
+ nbl_disp_chan_get_eth_id_resp); \
+ NBL_DISP_SET_OPS(set_promisc_mode, nbl_disp_set_promisc_mode, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_PROSISC_MODE,\
+ nbl_disp_chan_set_promisc_mode_req, \
+ nbl_disp_chan_set_promisc_mode_resp); \
+ NBL_DISP_SET_OPS(get_tx_headroom, nbl_disp_get_tx_headroom, \
+ NBL_DISP_CTRL_LVL_NET, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_net_stats, nbl_disp_get_net_stats, \
+ NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(get_rxfh_indir_size, nbl_disp_get_rxfh_indir_size,\
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE,\
+ nbl_disp_chan_get_rxfh_indir_size_req, \
+ nbl_disp_chan_get_rxfh_indir_size_resp); \
+ NBL_DISP_SET_OPS(set_rxfh_indir, nbl_disp_set_rxfh_indir, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_RXFH_INDIR,\
+ nbl_disp_chan_set_rxfh_indir_req, \
+ nbl_disp_chan_set_rxfh_indir_resp); \
+ NBL_DISP_SET_OPS(clear_queues, nbl_disp_clear_queues, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_CLEAR_QUEUE,\
+ nbl_disp_chan_clear_queues_req, \
+ nbl_disp_chan_clear_queues_resp); \
+} while (0)
+
+/* Structure starts here; adding an op should not require modifying anything below */
+static int nbl_disp_setup_msg(struct nbl_dispatch_mgt *disp_mgt)
+{
+ struct nbl_dispatch_ops *disp_ops = NBL_DISP_MGT_TO_DISP_OPS(disp_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ void *p = NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt);
+ int ret = 0;
+
+ if (!chan_ops->check_queue_exist(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ NBL_CHAN_TYPE_MAILBOX))
+ return 0;
+
+ mutex_init(&disp_mgt->ops_mutex_lock);
+ disp_mgt->ops_lock_required = true;
+
+#define NBL_DISP_SET_OPS(disp_op, func, ctrl, msg_type, msg_req, resp) \
+do { \
+ typeof(msg_type) _msg_type = (msg_type); \
+ typeof(ctrl) _ctrl_lvl = (ctrl); \
+ (void)(disp_ops->NBL_NAME(disp_op)); \
+ (void)(func); \
+ (void)(msg_req); \
+ (void)_ctrl_lvl; \
+ if (_msg_type >= 0) \
+ ret += chan_ops->register_msg(p, _msg_type, resp, disp_mgt);\
+} while (0)
+ NBL_DISP_OPS_TBL;
+#undef NBL_DISP_SET_OPS
+
+ return ret;
+}
+
+/* Ctrl lvl means that if a certain level is set, all disp_ops that
+ * declared this lvl will go directly to res_ops rather than send a
+ * channel msg, and vice versa.
+ */
+static int nbl_disp_setup_ctrl_lvl(struct nbl_dispatch_mgt *disp_mgt, u32 lvl)
+{
+ struct nbl_dispatch_ops *disp_ops;
+
+ disp_ops = NBL_DISP_MGT_TO_DISP_OPS(disp_mgt);
+
+ set_bit(lvl, disp_mgt->ctrl_lvl);
+
+#define NBL_DISP_SET_OPS(disp_op, func, ctrl, msg_type, msg_req, msg_resp) \
+do { \
+ typeof(msg_type) _msg_type = (msg_type); \
+ (void)(_msg_type); \
+ (void)(msg_resp); \
+ disp_ops->NBL_NAME(disp_op) = \
+ test_bit(ctrl, disp_mgt->ctrl_lvl) ? func : msg_req; \
+} while (0)
+ NBL_DISP_OPS_TBL;
+#undef NBL_DISP_SET_OPS
+
+ return 0;
+}
+
static int nbl_disp_setup_disp_mgt(struct nbl_common_info *common,
struct nbl_dispatch_mgt **disp_mgt)
{
@@ -88,7 +1928,30 @@ int nbl_disp_init(void *p, struct nbl_init_param *param)
NBL_DISP_MGT_TO_CHAN_OPS_TBL(*disp_mgt) = chan_ops_tbl;
NBL_DISP_MGT_TO_DISP_OPS_TBL(*disp_mgt) = *disp_ops_tbl;
+ ret = nbl_disp_setup_msg(*disp_mgt);
+ if (ret)
+ goto setup_msg_fail;
+
+ if (param->caps.has_ctrl) {
+ ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_MGT);
+ if (ret)
+ goto setup_msg_fail;
+ }
+
+ if (param->caps.has_net) {
+ ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_NET);
+ if (ret)
+ goto setup_msg_fail;
+ }
+
+ ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_ALWAYS);
+ if (ret)
+ goto setup_msg_fail;
+
return 0;
+
+setup_msg_fail:
+ nbl_disp_remove_ops(dev, disp_ops_tbl);
setup_ops_fail:
nbl_disp_remove_disp_mgt(common, disp_mgt);
setup_mgt_fail:
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
index 21cfb5a1000e..4670f22c44e2 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
@@ -29,6 +29,39 @@
#define NBL_DISP_MGT_TO_DISP_PRIV(disp_mgt) \
(NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->priv)
+#define NBL_OPS_CALL_LOCK(disp_mgt, func, ...) \
+do { \
+ typeof(disp_mgt) _disp_mgt = (disp_mgt); \
+ typeof(func) _func = (func); \
+ \
+ if (_disp_mgt->ops_lock_required) \
+ mutex_lock(&_disp_mgt->ops_mutex_lock); \
+ \
+ if (_func) \
+ _func(__VA_ARGS__); \
+ \
+ if (_disp_mgt->ops_lock_required) \
+ mutex_unlock(&_disp_mgt->ops_mutex_lock); \
+} while (0)
+
+#define NBL_OPS_CALL_LOCK_RET(disp_mgt, func, ...) \
+({ \
+ typeof(disp_mgt) _disp_mgt = (disp_mgt); \
+ typeof(func) _func = (func); \
+ typeof(_func(__VA_ARGS__)) _ret = 0; \
+ \
+ if (_disp_mgt->ops_lock_required) \
+ mutex_lock(&_disp_mgt->ops_mutex_lock); \
+ \
+ if (_func) \
+ _ret = _func(__VA_ARGS__); \
+ \
+ if (_disp_mgt->ops_lock_required) \
+ mutex_unlock(&_disp_mgt->ops_mutex_lock); \
+ \
+ _ret; \
+})
+
struct nbl_dispatch_mgt {
struct nbl_common_info *common;
struct nbl_resource_ops_tbl *res_ops_tbl;
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
index 2f6f1ffedafc..9985b0330020 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
@@ -21,6 +21,64 @@ enum {
};
struct nbl_dispatch_ops {
+ int (*init_chip_module)(void *priv);
+ void (*deinit_chip_module)(void *priv);
+ void (*get_resource_pt_ops)(void *priv,
+ struct nbl_resource_pt_ops *pt_ops);
+ int (*queue_init)(void *priv);
+ int (*configure_msix_map)(void *priv, u16 num_net_msix,
+ u16 num_others_msix, bool net_msix_mask_en);
+ int (*destroy_msix_map)(void *priv);
+ int (*enable_mailbox_irq)(void *p, u16 vector_id, bool enable_msix);
+ u16 (*get_global_vector)(void *priv, u16 vsi_id, u16 local_vec_id);
+ u16 (*get_msix_entry_id)(void *priv, u16 vsi_id, u16 local_vec_id);
+
+ int (*alloc_rings)(void *priv, struct net_device *netdev,
+ struct nbl_ring_param *param);
+ void (*remove_rings)(void *priv);
+ dma_addr_t (*start_tx_ring)(void *priv, u8 ring_index);
+ void (*stop_tx_ring)(void *priv, u8 ring_index);
+ dma_addr_t (*start_rx_ring)(void *priv, u8 ring_index, bool use_napi);
+ void (*stop_rx_ring)(void *priv, u8 ring_index);
+ void (*kick_rx_ring)(void *priv, u16 index);
+ struct nbl_napi_struct *(*get_vector_napi)(void *priv, u16 index);
+ void (*set_vector_info)(void *priv, u8 __iomem *irq_enable_base,
+ u32 irq_data, u16 index, bool mask_en);
+ int (*register_net)(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result);
+ void (*register_vsi_ring)(void *priv, u16 vsi_index, u16 ring_offset,
+ u16 ring_num);
+ int (*unregister_net)(void *priv);
+ int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
+ void (*free_txrx_queues)(void *priv, u16 vsi_id);
+ int (*setup_queue)(void *priv, struct nbl_txrx_queue_param *param,
+ bool is_tx);
+ void (*remove_all_queues)(void *priv, u16 vsi_id);
+ int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num);
+ int (*setup_q2vsi)(void *priv, u16 vsi_id);
+ void (*remove_q2vsi)(void *priv, u16 vsi_id);
+ int (*setup_rss)(void *priv, u16 vsi_id);
+ void (*remove_rss)(void *priv, u16 vsi_id);
+ int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld);
+ int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps,
+ bool rss_indir_set);
+ void (*remove_cqs)(void *priv, u16 vsi_id);
+
+ void (*clear_queues)(void *priv, u16 vsi_id);
+ u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id,
+ u32 *irq_data);
+ u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
+ void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
+ u8 *logic_eth_id);
+ int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ u32 (*get_tx_headroom)(void *priv);
+ void (*get_net_stats)(void *priv, struct nbl_stats *queue_stats);
+ void (*get_rxfh_indir_size)(void *priv, u16 vsi_id,
+ u32 *rxfh_indir_size);
+ int (*set_rxfh_indir)(void *priv, u16 vsi_id, const u32 *indir,
+ u32 indir_size);
};
struct nbl_dispatch_ops_tbl {
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
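The NBL_OPS_CALL_LOCK_RET macro in the patch above wraps an indirect ops call in an optional mutex and tolerates a NULL callback. A minimal user-space C sketch of the same pattern, using pthreads instead of the kernel mutex (the names `disp`, `ops_call_locked`, and `double_it` are illustrative stand-ins, not driver symbols):

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative miniature of the dispatch pattern: the lock is taken
 * only when the dispatcher requires it, and a NULL op yields 0,
 * mirroring NBL_OPS_CALL_LOCK_RET. */
struct disp {
	pthread_mutex_t ops_lock;
	int ops_lock_required;
};

static int ops_call_locked(struct disp *d, int (*op)(int), int arg)
{
	int ret = 0;

	if (d->ops_lock_required)
		pthread_mutex_lock(&d->ops_lock);
	if (op)
		ret = op(arg);
	if (d->ops_lock_required)
		pthread_mutex_unlock(&d->ops_lock);
	return ret;
}

static int double_it(int x)
{
	return 2 * x;
}
```

Unlike this function sketch, the macro additionally caches `disp_mgt` and `func` in `typeof` locals so each argument is evaluated exactly once.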
* [PATCH v3 net-next 13/15] net/nebula-matrix: add net dev init/reinit operation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (10 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 11/15] net/nebula-matrix: add Dispatch layer implementation illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 14/15] net/nebula-matrix: add common dev start/stop operation illusion.wang
` (3 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
net dev init: vsi build, register vsi, register net irq
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 315 ++++++++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 81 +++++
2 files changed, 396 insertions(+)
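The vsi handling added below dispatches through a small per-type ops table (`vsi_tbl`) indexed by vsi index. A hedged stand-alone sketch of that dispatch-by-index vtable pattern (the enum, struct, and function names here are illustrative, not the driver's):

```c
#include <stddef.h>

/* Illustrative vtable-by-index dispatch, mirroring how vsi_tbl maps a
 * vsi type to its register/setup/remove callbacks via designated
 * initializers. */
enum vsi_type { VSI_DATA, VSI_MAX };

struct vsi_ops {
	int (*setup)(int id);
};

static int data_setup(int id)
{
	return id;	/* pretend setup succeeded, echo the id back */
}

static const struct vsi_ops vsi_tbl_demo[VSI_MAX] = {
	[VSI_DATA] = { .setup = data_setup },
};

static int vsi_setup(enum vsi_type t, int id)
{
	const struct vsi_ops *ops = &vsi_tbl_demo[t];

	/* A missing callback is treated as an error, not a crash. */
	return ops->setup ? ops->setup(id) : -1;
}
```

Designated initializers keep the table sparse-safe: entries not listed are zeroed, so every callback must be NULL-checked before the call, as the driver does with its vsi list.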
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
index b661d8ab972c..2d34ab101144 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -20,6 +20,16 @@ static void nbl_dev_register_common_irq(struct nbl_dev_mgt *dev_mgt)
msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num = 1;
}
+static void nbl_dev_register_net_irq(struct nbl_dev_mgt *dev_mgt, u16 queue_num)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+
+ msix_info->serv_info[NBL_MSIX_NET_TYPE].num = queue_num;
+ msix_info->serv_info[NBL_MSIX_NET_TYPE].hw_self_mask_en = 1;
+}
+
/* ---------- Channel config ---------- */
static int nbl_dev_setup_chan_qinfo(struct nbl_dev_mgt *dev_mgt, u8 chan_type)
{
@@ -185,6 +195,303 @@ static void nbl_dev_remove_ctrl_dev(struct nbl_adapter *adapter)
serv_ops->destroy_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
}
+static const struct net_device_ops netdev_ops_leonis_pf = {
+};
+
+static int nbl_dev_setup_netops_leonis(void *priv, struct net_device *netdev,
+ struct nbl_init_param *param)
+{
+ struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv;
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ netdev->netdev_ops = &netdev_ops_leonis_pf;
+ serv_ops->set_netdev_ops(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ &netdev_ops_leonis_pf, true);
+ return 0;
+}
+
+static int nbl_dev_register_net(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_register_net_param register_param = {0};
+ int ret;
+
+ ret = serv_ops->register_net(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ &register_param, register_result);
+ if (ret)
+ return ret;
+
+ if (!register_result->tx_queue_num || !register_result->rx_queue_num)
+ return -EIO;
+ return 0;
+}
+
+static void nbl_dev_unregister_net(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt);
+ int ret;
+
+ ret = serv_ops->unregister_net(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+ if (ret)
+ dev_err(dev, "unregister net failed\n");
+}
+
+static int nbl_dev_vsi_alloc_queue(struct nbl_dev_net *net_dev, u16 queue_num)
+{
+ struct nbl_dev_vsi_controller *vsi_ctrl = &net_dev->vsi_ctrl;
+ int queue_offset = 0;
+
+ if (vsi_ctrl->queue_free_offset + queue_num > net_dev->kernel_queue_num)
+ return -ENOSPC;
+
+ queue_offset = vsi_ctrl->queue_free_offset;
+ vsi_ctrl->queue_free_offset += queue_num;
+
+ return queue_offset;
+}
+
+static int nbl_dev_vsi_common_setup(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param,
+ struct nbl_dev_vsi *vsi)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_vsi_param vsi_param = { 0 };
+ int ret;
+
+ ret = nbl_dev_vsi_alloc_queue(net_dev, vsi->queue_num);
+ if (ret < 0)
+ return ret;
+
+ vsi->queue_offset = ret;
+ vsi_param.index = vsi->index;
+ vsi_param.vsi_id = vsi->vsi_id;
+ vsi_param.queue_offset = vsi->queue_offset;
+ vsi_param.queue_num = vsi->queue_num;
+
+ /* Tell serv & res layer the mapping from vsi to queue_id */
+ ret = serv_ops->register_vsi_info(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ &vsi_param);
+ return ret;
+}
+
+static void nbl_dev_vsi_common_remove(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_dev_vsi *vsi)
+{
+}
+
+static int nbl_dev_vsi_data_register(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param,
+ void *vsi_data)
+{
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+ int ret;
+
+ ret = nbl_dev_register_net(dev_mgt, &vsi->register_result);
+ if (ret)
+ return ret;
+
+ vsi->queue_num = vsi->register_result.tx_queue_num;
+ vsi->queue_size = vsi->register_result.queue_size;
+
+ nbl_debug(common, "Data vsi register, queue_num %d, queue_size %d",
+ vsi->queue_num, vsi->queue_size);
+
+ return 0;
+}
+
+static int nbl_dev_vsi_data_setup(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param, void *vsi_data)
+{
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+
+ return nbl_dev_vsi_common_setup(dev_mgt, param, vsi);
+}
+
+static void nbl_dev_vsi_data_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data)
+{
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+
+ nbl_dev_vsi_common_remove(dev_mgt, vsi);
+}
+
+static struct nbl_dev_vsi_tbl vsi_tbl[NBL_VSI_MAX] = {
+ [NBL_VSI_DATA] = {
+ .vsi_ops = {
+ .register_vsi = nbl_dev_vsi_data_register,
+ .setup = nbl_dev_vsi_data_setup,
+ .remove = nbl_dev_vsi_data_remove,
+ },
+ },
+};
+
+static int nbl_dev_vsi_build(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param)
+{
+ struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+ struct nbl_dev_vsi *vsi = NULL;
+ int i;
+
+ net_dev->vsi_ctrl.queue_num = 0;
+ net_dev->vsi_ctrl.queue_free_offset = 0;
+
+ /* Build all vsi, and alloc vsi_id for each of them */
+ for (i = 0; i < NBL_VSI_MAX; i++) {
+ vsi = devm_kzalloc(NBL_DEV_MGT_TO_DEV(dev_mgt), sizeof(*vsi),
+ GFP_KERNEL);
+ if (!vsi)
+ goto malloc_vsi_fail;
+
+ vsi->ops = &vsi_tbl[i].vsi_ops;
+ vsi->vsi_id = serv_ops->get_vsi_id(priv, 0, i);
+ vsi->index = i;
+
+ net_dev->vsi_ctrl.vsi_list[i] = vsi;
+ }
+
+ return 0;
+
+malloc_vsi_fail:
+ while (--i >= 0) {
+ devm_kfree(NBL_DEV_MGT_TO_DEV(dev_mgt),
+ net_dev->vsi_ctrl.vsi_list[i]);
+ net_dev->vsi_ctrl.vsi_list[i] = NULL;
+ }
+
+ return -ENOMEM;
+}
+
+static void nbl_dev_vsi_destroy(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ int i;
+
+ for (i = 0; i < NBL_VSI_MAX; i++)
+ if (net_dev->vsi_ctrl.vsi_list[i]) {
+ devm_kfree(NBL_DEV_MGT_TO_DEV(dev_mgt),
+ net_dev->vsi_ctrl.vsi_list[i]);
+ net_dev->vsi_ctrl.vsi_list[i] = NULL;
+ }
+}
+
+struct nbl_dev_vsi *nbl_dev_vsi_select(struct nbl_dev_mgt *dev_mgt,
+ u8 vsi_index)
+{
+ struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_dev_vsi *vsi = NULL;
+ int i = 0;
+
+ for (i = 0; i < NBL_VSI_MAX; i++) {
+ vsi = net_dev->vsi_ctrl.vsi_list[i];
+ if (vsi && vsi->index == vsi_index)
+ return vsi;
+ }
+
+ return NULL;
+}
+
+static struct nbl_dev_net_ops netdev_ops[NBL_PRODUCT_MAX] = {
+ {
+ .setup_netdev_ops = nbl_dev_setup_netops_leonis,
+ },
+};
+
+static void nbl_det_setup_net_dev_ops(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param)
+{
+ NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt) = &netdev_ops[param->product_type];
+}
+
+static int nbl_dev_setup_net_dev(struct nbl_adapter *adapter,
+ struct nbl_init_param *param)
+{
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_net **net_dev = &NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct device *dev = NBL_ADAP_TO_DEV(adapter);
+ struct nbl_dev_vsi *vsi;
+ u16 kernel_queue_num = 0;
+ int i, ret;
+
+ *net_dev = devm_kzalloc(dev, sizeof(struct nbl_dev_net), GFP_KERNEL);
+ if (!*net_dev)
+ return -ENOMEM;
+
+ ret = nbl_dev_vsi_build(dev_mgt, param);
+ if (ret)
+ goto vsi_build_fail;
+
+ for (i = 0; i < NBL_VSI_MAX; i++) {
+ vsi = (*net_dev)->vsi_ctrl.vsi_list[i];
+
+ if (!vsi)
+ continue;
+
+ ret = vsi->ops->register_vsi(dev_mgt, param, vsi);
+ if (ret) {
+ dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt),
"Vsi %d register failed\n", vsi->index);
+ goto vsi_register_fail;
+ }
+ kernel_queue_num += vsi->queue_num;
+ }
+
+ (*net_dev)->kernel_queue_num = kernel_queue_num;
+
+ for (i = 0; i < NBL_VSI_MAX; i++) {
+ vsi = (*net_dev)->vsi_ctrl.vsi_list[i];
+
+ if (!vsi)
+ continue;
+ ret = vsi->ops->setup(dev_mgt, param, vsi);
+ if (ret) {
+ dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt),
"Vsi %d setup failed\n", vsi->index);
+ goto vsi_setup_fail;
+ }
+ }
+
+ nbl_dev_register_net_irq(dev_mgt, kernel_queue_num);
+
+ nbl_det_setup_net_dev_ops(dev_mgt, param);
+
+ return 0;
+
+vsi_setup_fail:
+vsi_register_fail:
+ nbl_dev_vsi_destroy(dev_mgt);
+vsi_build_fail:
+ devm_kfree(dev, *net_dev);
+ return ret;
+}
+
+static void nbl_dev_remove_net_dev(struct nbl_adapter *adapter)
+{
+ struct device *dev = NBL_ADAP_TO_DEV(adapter);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_net **net_dev = &NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_dev_vsi *vsi;
+ int i;
+
+ if (!*net_dev)
+ return;
+
+ for (i = 0; i < NBL_VSI_MAX; i++) {
+ vsi = (*net_dev)->vsi_ctrl.vsi_list[i];
+
+ if (!vsi)
+ continue;
+
+ vsi->ops->remove(dev_mgt, vsi);
+ }
+ nbl_dev_vsi_destroy(dev_mgt);
+
+ nbl_dev_unregister_net(dev_mgt);
+
+ devm_kfree(dev, *net_dev);
+ *net_dev = NULL;
+}
+
static int nbl_dev_setup_dev_mgt(struct nbl_common_info *common,
struct nbl_dev_mgt **dev_mgt)
{
@@ -257,12 +564,19 @@ int nbl_dev_init(void *p, struct nbl_init_param *param)
if (ret)
goto setup_ctrl_dev_fail;
}
+
+ ret = nbl_dev_setup_net_dev(adapter, param);
+ if (ret)
+ goto setup_net_dev_fail;
+
ret = nbl_dev_setup_ops(dev, dev_ops_tbl, adapter);
if (ret)
goto setup_ops_fail;
return 0;
setup_ops_fail:
+ nbl_dev_remove_net_dev(adapter);
+setup_net_dev_fail:
if (common->is_ctrl)
nbl_dev_remove_ctrl_dev(adapter);
setup_ctrl_dev_fail:
@@ -284,6 +598,7 @@ void nbl_dev_remove(void *p)
&NBL_ADAP_TO_DEV_OPS_TBL(adapter);
nbl_dev_remove_ops(dev, dev_ops_tbl);
+ nbl_dev_remove_net_dev(adapter);
if (common->is_ctrl)
nbl_dev_remove_ctrl_dev(adapter);
nbl_dev_remove_common_dev(adapter);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
index 1c2f8d7e4272..ad41117511d0 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
@@ -13,6 +13,26 @@
#include "nbl_service.h"
+static int
+nbl_serv_register_net(void *priv, struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->register_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ register_param, register_result);
+}
+
+static int nbl_serv_unregister_net(void *priv)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops;
+
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ return disp_ops->unregister_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+}
+
static int nbl_serv_init_chip(void *priv)
{
struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -72,10 +92,71 @@ static void nbl_serv_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode,
eth_mode, eth_id, logic_eth_id);
}
+static void nbl_serv_set_netdev_ops(void *priv,
+ const struct net_device_ops *net_device_ops,
+ bool is_pf)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_net_resource_mgt *net_resource_mgt =
+ NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+ struct device *dev = NBL_SERV_MGT_TO_DEV(serv_mgt);
+
+ dev_dbg(dev, "set netdev ops:%p is_pf:%d\n", net_device_ops, is_pf);
+ if (is_pf)
+ net_resource_mgt->netdev_ops.pf_netdev_ops =
+ (void *)net_device_ops;
+}
+
+static int nbl_serv_register_vsi_info(void *priv,
+ struct nbl_vsi_param *vsi_param)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ u16 vsi_index = vsi_param->index;
+ u32 num_cpus;
+
+ ring_mgt->vsi_info[vsi_index].vsi_index = vsi_index;
+ ring_mgt->vsi_info[vsi_index].vsi_id = vsi_param->vsi_id;
+ ring_mgt->vsi_info[vsi_index].ring_offset = vsi_param->queue_offset;
+ ring_mgt->vsi_info[vsi_index].ring_num = vsi_param->queue_num;
+
+ /* Init the active ring number before the first open, so that the
+  * fd direct config check succeeds.
+  */
+ num_cpus = num_online_cpus();
+ ring_mgt->vsi_info[vsi_index].active_ring_num =
+ min_t(u16, num_cpus,
+ vsi_param->queue_num);
+
+ /*
+  * Clear configs in case this function exited abnormally last time.
+  * Only for the data vsi: a VF in a VM supports only the data vsi,
+  * and a DPDK user vsi must not leak resources.
+  */
+ if (vsi_index == NBL_VSI_DATA)
+ disp_ops->clear_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_param->vsi_id);
+ disp_ops->register_vsi_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_index, vsi_param->queue_offset,
+ vsi_param->queue_num);
+
+ return disp_ops->register_vsi2q(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_index, vsi_param->vsi_id,
+ vsi_param->queue_offset,
+ vsi_param->queue_num);
+}
+
static struct nbl_service_ops serv_ops = {
.init_chip = nbl_serv_init_chip,
.destroy_chip = nbl_serv_destroy_chip,
+ .register_net = nbl_serv_register_net,
+ .unregister_net = nbl_serv_unregister_net,
+
+ .register_vsi_info = nbl_serv_register_vsi_info,
+
+ .set_netdev_ops = nbl_serv_set_netdev_ops,
.get_vsi_id = nbl_serv_get_vsi_id,
.get_eth_id = nbl_serv_get_eth_id,
};
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
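nbl_dev_vsi_alloc_queue() in the patch above hands out contiguous queue offsets with a simple bump pointer and never frees. A minimal sketch of that allocator, using an `int` return so that -ENOSPC stays distinguishable from a valid offset (the `queue_pool` and `queue_alloc` names are illustrative, not driver symbols):

```c
#include <errno.h>

/* Illustrative bump allocator in the style of nbl_dev_vsi_alloc_queue():
 * contiguous offsets, no per-allocation free, -ENOSPC once the pool is
 * exhausted. */
struct queue_pool {
	int free_offset;	/* next offset to hand out */
	int total;		/* pool capacity */
};

static int queue_alloc(struct queue_pool *p, int num)
{
	int offset;

	if (p->free_offset + num > p->total)
		return -ENOSPC;

	offset = p->free_offset;
	p->free_offset += num;
	return offset;
}
```

A bump allocator fits here because all vsi queue ranges are carved out once at init and released together at teardown, so no free list is needed.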
* [PATCH v3 net-next 14/15] net/nebula-matrix: add common dev start/stop operation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (11 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 13/15] net/nebula-matrix: add net dev init/reinit operation illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 15/15] net/nebula-matrix: add net " illusion.wang
` (2 subsequent siblings)
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
start common dev: configure the msix map table, allocate and enable msix
vectors, register the mailbox ISR and enable the mailbox irq
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 306 +++++++++++++-
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 395 +++++++++++++++++-
.../net/ethernet/nebula-matrix/nbl/nbl_main.c | 34 +-
3 files changed, 732 insertions(+), 3 deletions(-)
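The start path added in this patch unwinds failures with the kernel's goto-ladder idiom: each error label undoes exactly the steps that already succeeded, in reverse order. A compact sketch of that control flow (step names and the `steps_done` counter are stand-ins for illustration, not driver symbols):

```c
#include <stdbool.h>

/* Tracks how many setup steps are currently live, so the tests can
 * verify the unwind leaves nothing behind. */
static int steps_done;

static int step(bool ok)
{
	if (!ok)
		return -1;
	steps_done++;
	return 0;
}

static void unstep(void)
{
	steps_done--;
}

/* Illustrative goto-ladder in the style of nbl_dev_start_common_dev():
 * on failure at step N, fall through the labels undoing N-1 .. 1. */
static int start(bool a_ok, bool b_ok, bool c_ok)
{
	int ret;

	ret = step(a_ok);
	if (ret)
		goto a_err;
	ret = step(b_ok);
	if (ret)
		goto b_err;
	ret = step(c_ok);
	if (ret)
		goto c_err;
	return 0;

c_err:
	unstep();	/* undo step b */
b_err:
	unstep();	/* undo step a */
a_err:
	return ret;
}
```

The matching stop path (here it would call `unstep()` three times) runs the same teardown in full, which is why the labels must stay in strict reverse order of the setup calls.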
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
index 2d34ab101144..cdc1baa5988d 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -11,6 +11,17 @@
#include "nbl_dev.h"
static struct nbl_dev_ops dev_ops;
+static int nbl_dev_clean_mailbox_schedule(struct nbl_dev_mgt *dev_mgt);
+
+/* ---------- Interrupt config ---------- */
+static irqreturn_t nbl_dev_clean_mailbox(int __always_unused irq, void *data)
+{
+ struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)data;
+
+ nbl_dev_clean_mailbox_schedule(dev_mgt);
+ return IRQ_HANDLED;
+}
+
static void nbl_dev_register_common_irq(struct nbl_dev_mgt *dev_mgt)
{
struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
@@ -30,6 +41,237 @@ static void nbl_dev_register_net_irq(struct nbl_dev_mgt *dev_mgt, u16 queue_num)
msix_info->serv_info[NBL_MSIX_NET_TYPE].hw_self_mask_en = 1;
}
+static int nbl_dev_request_mailbox_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt);
+ u16 local_vec_id;
+ u32 irq_num;
+ int err;
+
+ if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num)
+ return 0;
+
+ local_vec_id =
+ msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id;
+ irq_num = msix_info->msix_entries[local_vec_id].vector;
+
+ snprintf(dev_common->mailbox_name, sizeof(dev_common->mailbox_name),
+ "nbl_mailbox@pci:%s", pci_name(NBL_COMMON_TO_PDEV(common)));
+ err = devm_request_irq(dev, irq_num, nbl_dev_clean_mailbox, 0,
+ dev_common->mailbox_name, dev_mgt);
+ if (err) {
+ dev_err(dev, "Request mailbox irq handler failed err: %d\n",
+ err);
+ return err;
+ }
+
+ return 0;
+}
+
+static void nbl_dev_free_mailbox_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt);
+ u16 local_vec_id;
+ u32 irq_num;
+
+ if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num)
+ return;
+
+ local_vec_id =
+ msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id;
+ irq_num = msix_info->msix_entries[local_vec_id].vector;
+
+ devm_free_irq(dev, irq_num, dev_mgt);
+}
+
+static int nbl_dev_enable_mailbox_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ u16 local_vec_id;
+
+ if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num)
+ return 0;
+
+ local_vec_id =
+ msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id;
+ chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+ NBL_CHAN_INTERRUPT_READY,
+ NBL_CHAN_TYPE_MAILBOX, true);
+
+ return serv_ops->enable_mailbox_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ local_vec_id, true);
+}
+
+static int nbl_dev_disable_mailbox_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ u16 local_vec_id;
+
+ if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num)
+ return 0;
+
+ nbl_common_flush_task(&dev_common->clean_mbx_task);
+
+ local_vec_id =
+ msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id;
+ chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+ NBL_CHAN_INTERRUPT_READY,
+ NBL_CHAN_TYPE_MAILBOX, false);
+
+ return serv_ops->enable_mailbox_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ local_vec_id, false);
+}
+
+static int nbl_dev_configure_msix_map(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ u16 msix_not_net_num = 0;
+ u16 msix_net_num = msix_info->serv_info[NBL_MSIX_NET_TYPE].num;
+ bool mask_en = msix_info->serv_info[NBL_MSIX_NET_TYPE].hw_self_mask_en;
+ int err = 0;
+ int i;
+
+ for (i = NBL_MSIX_NET_TYPE; i < NBL_MSIX_TYPE_MAX; i++)
+ msix_info->serv_info[i].base_vector_id =
+ msix_info->serv_info[i - 1].base_vector_id +
+ msix_info->serv_info[i - 1].num;
+
+ for (i = NBL_MSIX_MAILBOX_TYPE; i < NBL_MSIX_TYPE_MAX; i++) {
+ if (i == NBL_MSIX_NET_TYPE)
+ continue;
+
+ msix_not_net_num += msix_info->serv_info[i].num;
+ }
+
+ err = serv_ops->configure_msix_map(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ msix_net_num,
+ msix_not_net_num,
+ mask_en);
+
+ return err;
+}
+
+static int nbl_dev_destroy_msix_map(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ return serv_ops->destroy_msix_map(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+}
+
+static int nbl_dev_alloc_msix_entries(struct nbl_dev_mgt *dev_mgt,
+ u16 num_entries)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ u16 i;
+
+ msix_info->msix_entries =
+ devm_kcalloc(NBL_DEV_MGT_TO_DEV(dev_mgt), num_entries,
+ sizeof(*msix_info->msix_entries), GFP_KERNEL);
+ if (!msix_info->msix_entries)
+ return -ENOMEM;
+
+ for (i = 0; i < num_entries; i++)
+ msix_info->msix_entries[i].entry =
+ serv_ops->get_msix_entry_id(priv, i);
+
+ dev_info(NBL_DEV_MGT_TO_DEV(dev_mgt), "alloc msix entry: %u-%u.\n",
+ msix_info->msix_entries[0].entry,
+ msix_info->msix_entries[num_entries - 1].entry);
+
+ return 0;
+}
+
+static void nbl_dev_free_msix_entries(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+
+ devm_kfree(NBL_DEV_MGT_TO_DEV(dev_mgt), msix_info->msix_entries);
+ msix_info->msix_entries = NULL;
+}
+
+static int nbl_dev_alloc_msix_intr(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ int needed = 0;
+ int err;
+ int i;
+
+ for (i = 0; i < NBL_MSIX_TYPE_MAX; i++)
+ needed += msix_info->serv_info[i].num;
+
+ err = nbl_dev_alloc_msix_entries(dev_mgt, (u16)needed);
+ if (err) {
+ dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt),
+ "Allocate msix entries failed\n");
+ return err;
+ }
+
+ err = pci_enable_msix_range(NBL_COMMON_TO_PDEV(common),
+ msix_info->msix_entries, needed, needed);
+ if (err < 0) {
+ dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt),
+ "pci_enable_msix_range failed, err = %d\n", err);
+ goto enable_msix_failed;
+ }
+
+ return needed;
+
+enable_msix_failed:
+ nbl_dev_free_msix_entries(dev_mgt);
+ return err;
+}
+
+static void nbl_dev_free_msix_intr(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+
+ pci_disable_msix(NBL_COMMON_TO_PDEV(common));
+ nbl_dev_free_msix_entries(dev_mgt);
+}
+
+static int nbl_dev_init_interrupt_scheme(struct nbl_dev_mgt *dev_mgt)
+{
+ int err = 0;
+
+ err = nbl_dev_alloc_msix_intr(dev_mgt);
+ if (err < 0) {
+ dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt),
+ "Failed to enable MSI-X vectors\n");
+ return err;
+ }
+ return 0;
+}
+
+static void nbl_dev_clear_interrupt_scheme(struct nbl_dev_mgt *dev_mgt)
+{
+ nbl_dev_free_msix_intr(dev_mgt);
+}
+
/* ---------- Channel config ---------- */
static int nbl_dev_setup_chan_qinfo(struct nbl_dev_mgt *dev_mgt, u8 chan_type)
{
@@ -96,6 +338,14 @@ static void nbl_dev_clean_mailbox_task(struct work_struct *work)
NBL_CHAN_TYPE_MAILBOX);
}
+static int nbl_dev_clean_mailbox_schedule(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_common *common_dev = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+
+ nbl_common_queue_work(&common_dev->clean_mbx_task, false);
+ return 0;
+}
+
/* ---------- Dev init process ---------- */
static int nbl_dev_setup_common_dev(struct nbl_adapter *adapter,
struct nbl_init_param *param)
@@ -572,6 +822,7 @@ int nbl_dev_init(void *p, struct nbl_init_param *param)
ret = nbl_dev_setup_ops(dev, dev_ops_tbl, adapter);
if (ret)
goto setup_ops_fail;
+
return 0;
setup_ops_fail:
@@ -605,11 +856,64 @@ void nbl_dev_remove(void *p)
nbl_dev_remove_dev_mgt(common, dev_mgt);
}
-int nbl_dev_start(void *p, struct nbl_init_param *param)
+/* ---------- Dev start process ---------- */
+
+static int nbl_dev_start_common_dev(struct nbl_adapter *adapter,
+ struct nbl_init_param *param)
{
+ struct nbl_dev_mgt *dev_mgt =
+ (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+ int ret;
+
+ ret = nbl_dev_configure_msix_map(dev_mgt);
+ if (ret)
+ goto config_msix_map_err;
+
+ ret = nbl_dev_init_interrupt_scheme(dev_mgt);
+ if (ret)
+ goto init_interrupt_scheme_err;
+
+ ret = nbl_dev_request_mailbox_irq(dev_mgt);
+ if (ret)
+ goto mailbox_request_irq_err;
+
+ ret = nbl_dev_enable_mailbox_irq(dev_mgt);
+ if (ret)
+ goto enable_mailbox_irq_err;
return 0;
+enable_mailbox_irq_err:
+ nbl_dev_free_mailbox_irq(dev_mgt);
+mailbox_request_irq_err:
+ nbl_dev_clear_interrupt_scheme(dev_mgt);
+init_interrupt_scheme_err:
+ nbl_dev_destroy_msix_map(dev_mgt);
+config_msix_map_err:
+ return ret;
+}
+
+static void nbl_dev_stop_common_dev(struct nbl_adapter *adapter)
+{
+ struct nbl_dev_mgt *dev_mgt =
+ (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+
+ nbl_dev_disable_mailbox_irq(dev_mgt);
+ nbl_dev_free_mailbox_irq(dev_mgt);
+ nbl_dev_clear_interrupt_scheme(dev_mgt);
+ nbl_dev_destroy_msix_map(dev_mgt);
+}
+
+int nbl_dev_start(void *p, struct nbl_init_param *param)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ int ret;
+
+ ret = nbl_dev_start_common_dev(adapter, param);
+ return ret;
}
void nbl_dev_stop(void *p)
{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+
+ nbl_dev_stop_common_dev(adapter);
}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
index ad41117511d0..b83b79a8001f 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
@@ -13,6 +13,97 @@
#include "nbl_service.h"
+static int nbl_serv_set_tx_rings(struct nbl_serv_ring_mgt *ring_mgt,
+ struct net_device *netdev, struct device *dev)
+{
+ u16 ring_num = ring_mgt->tx_ring_num;
+ int i;
+
+ ring_mgt->tx_rings = devm_kcalloc(dev, ring_num,
+ sizeof(*ring_mgt->tx_rings),
+ GFP_KERNEL);
+ if (!ring_mgt->tx_rings)
+ return -ENOMEM;
+
+ for (i = 0; i < ring_num; i++)
+ ring_mgt->tx_rings[i].index = i;
+
+ return 0;
+}
+
+static void nbl_serv_remove_tx_ring(struct nbl_serv_ring_mgt *ring_mgt,
+ struct device *dev)
+{
+ devm_kfree(dev, ring_mgt->tx_rings);
+ ring_mgt->tx_rings = NULL;
+}
+
+static int nbl_serv_set_rx_rings(struct nbl_serv_ring_mgt *ring_mgt,
+ struct net_device *netdev, struct device *dev)
+{
+ u16 ring_num = ring_mgt->rx_ring_num;
+ int i;
+
+ ring_mgt->rx_rings = devm_kcalloc(dev, ring_num,
+ sizeof(*ring_mgt->rx_rings),
+ GFP_KERNEL);
+ if (!ring_mgt->rx_rings)
+ return -ENOMEM;
+
+ for (i = 0; i < ring_num; i++)
+ ring_mgt->rx_rings[i].index = i;
+
+ return 0;
+}
+
+static void nbl_serv_remove_rx_ring(struct nbl_serv_ring_mgt *ring_mgt,
+ struct device *dev)
+{
+ devm_kfree(dev, ring_mgt->rx_rings);
+ ring_mgt->rx_rings = NULL;
+}
+
+static int nbl_serv_set_vectors(struct nbl_service_mgt *serv_mgt,
+ struct net_device *netdev, struct device *dev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_resource_pt_ops *pt_ops = NBL_ADAPTER_TO_RES_PT_OPS(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt);
+ u16 ring_num = ring_mgt->rx_ring_num;
+ int i;
+
+ ring_mgt->vectors = devm_kcalloc(dev, ring_num,
+ sizeof(*ring_mgt->vectors),
+ GFP_KERNEL);
+ if (!ring_mgt->vectors)
+ return -ENOMEM;
+
+ for (i = 0; i < ring_num; i++) {
+ ring_mgt->vectors[i].nbl_napi =
+ disp_ops->get_vector_napi(p, i);
+ netif_napi_add(netdev, &ring_mgt->vectors[i].nbl_napi->napi,
+ pt_ops->napi_poll);
+ ring_mgt->vectors[i].netdev = netdev;
+ }
+
+ return 0;
+}
+
+static void nbl_serv_remove_vectors(struct nbl_serv_ring_mgt *ring_mgt,
+ struct device *dev)
+{
+ u16 ring_num = ring_mgt->rx_ring_num;
+ int i;
+
+ for (i = 0; i < ring_num; i++)
+ netif_napi_del(&ring_mgt->vectors[i].nbl_napi->napi);
+
+ devm_kfree(dev, ring_mgt->vectors);
+ ring_mgt->vectors = NULL;
+}
+
static int
nbl_serv_register_net(void *priv, struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result)
@@ -33,6 +124,227 @@ static int nbl_serv_unregister_net(void *priv)
return disp_ops->unregister_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
}
+static int nbl_serv_setup_txrx_queues(void *priv, u16 vsi_id, u16 queue_num,
+ u16 net_vector_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_serv_vector *vec;
+ void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt);
+ int i, ret = 0;
+
+ /* queue_num include user&kernel queue */
+ ret = disp_ops->alloc_txrx_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_id, queue_num);
+ if (ret)
+ return -EFAULT;
+
+ /* ring_mgt->tx_ring_number only for kernel use */
+ for (i = 0; i < ring_mgt->tx_ring_num; i++) {
+ ring_mgt->tx_rings[i].local_queue_id = NBL_PAIR_ID_GET_TX(i);
+ ring_mgt->rx_rings[i].local_queue_id = NBL_PAIR_ID_GET_RX(i);
+ }
+
+ for (i = 0; i < ring_mgt->rx_ring_num; i++) {
+ vec = &ring_mgt->vectors[i];
+ vec->local_vec_id = i + net_vector_id;
+ vec->global_vec_id =
+ disp_ops->get_global_vector(p,
+ vsi_id,
+ vec->local_vec_id);
+ vec->irq_enable_base = (u8 __iomem *)
+ disp_ops->get_msix_irq_enable_info(p,
+ vec->global_vec_id,
+ &vec->irq_data);
+
+ disp_ops->set_vector_info(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vec->irq_enable_base,
+ vec->irq_data, i,
+ ring_mgt->net_msix_mask_en);
+ }
+
+ return 0;
+}
+
+static void nbl_serv_remove_txrx_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops;
+
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ disp_ops->free_txrx_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+}
+
+static int nbl_serv_setup_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->setup_q2vsi(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_id);
+}
+
+static void nbl_serv_remove_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ disp_ops->remove_q2vsi(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+}
+
+static int nbl_serv_setup_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->setup_rss(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+}
+
+static void nbl_serv_remove_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ disp_ops->remove_rss(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+}
+
+static int nbl_serv_setup_rss_indir(void *priv, u16 vsi_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info =
+ &ring_mgt->vsi_info[NBL_VSI_DATA];
+ struct device *dev = NBL_SERV_MGT_TO_DEV(serv_mgt);
+ u32 rxfh_indir_size = 0;
+ int num_cpus = 0, real_qps = 0;
+ u32 *indir = NULL;
+ int i = 0;
+
+ disp_ops->get_rxfh_indir_size(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_id, &rxfh_indir_size);
+ indir = devm_kcalloc(dev, rxfh_indir_size, sizeof(u32), GFP_KERNEL);
+ if (!indir)
+ return -ENOMEM;
+
+ num_cpus = num_online_cpus();
+ real_qps = num_cpus > vsi_info->ring_num ? vsi_info->ring_num :
+ num_cpus;
+
+ for (i = 0; i < rxfh_indir_size; i++)
+ indir[i] = i % real_qps;
+
+ disp_ops->set_rxfh_indir(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id,
+ indir, rxfh_indir_size);
+ devm_kfree(dev, indir);
+ return 0;
+}
+
+static int nbl_serv_alloc_rings(void *priv, struct net_device *netdev,
+ struct nbl_ring_param *param)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt;
+ struct nbl_dispatch_ops *disp_ops;
+ struct device *dev;
+ int ret = 0;
+
+ dev = NBL_SERV_MGT_TO_DEV(serv_mgt);
+ ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ ring_mgt->tx_ring_num = param->tx_ring_num;
+ ring_mgt->rx_ring_num = param->rx_ring_num;
+ ring_mgt->tx_desc_num = param->queue_size;
+ ring_mgt->rx_desc_num = param->queue_size;
+
+ ret = disp_ops->alloc_rings(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), netdev,
+ param);
+ if (ret)
+ goto alloc_rings_fail;
+
+ ret = nbl_serv_set_tx_rings(ring_mgt, netdev, dev);
+ if (ret)
+ goto set_tx_fail;
+ ret = nbl_serv_set_rx_rings(ring_mgt, netdev, dev);
+ if (ret)
+ goto set_rx_fail;
+
+ ret = nbl_serv_set_vectors(serv_mgt, netdev, dev);
+ if (ret)
+ goto set_vectors_fail;
+
+ return 0;
+
+set_vectors_fail:
+ nbl_serv_remove_rx_ring(ring_mgt, dev);
+set_rx_fail:
+ nbl_serv_remove_tx_ring(ring_mgt, dev);
+set_tx_fail:
+ disp_ops->remove_rings(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+alloc_rings_fail:
+ return ret;
+}
+
+static void nbl_serv_free_rings(void *priv)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt;
+ struct nbl_dispatch_ops *disp_ops;
+ struct device *dev;
+
+ dev = NBL_SERV_MGT_TO_DEV(serv_mgt);
+ ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ nbl_serv_remove_vectors(ring_mgt, dev);
+ nbl_serv_remove_rx_ring(ring_mgt, dev);
+ nbl_serv_remove_tx_ring(ring_mgt, dev);
+
+ disp_ops->remove_rings(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+}
+
+static int nbl_serv_enable_napis(void *priv, u16 vsi_index)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info =
+ &ring_mgt->vsi_info[vsi_index];
+ u16 start = vsi_info->ring_offset,
+ end = vsi_info->ring_offset + vsi_info->ring_num;
+ int i;
+
+ for (i = start; i < end; i++)
+ napi_enable(&ring_mgt->vectors[i].nbl_napi->napi);
+
+ return 0;
+}
+
+static void nbl_serv_disable_napis(void *priv, u16 vsi_index)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info =
+ &ring_mgt->vsi_info[vsi_index];
+ u16 start = vsi_info->ring_offset,
+ end = vsi_info->ring_offset + vsi_info->ring_num;
+ int i;
+
+ for (i = start; i < end; i++)
+ napi_disable(&ring_mgt->vectors[i].nbl_napi->napi);
+}
+
+static void nbl_serv_set_mask_en(void *priv, bool enable)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt;
+
+ ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ ring_mgt->net_msix_mask_en = enable;
+}
+
static int nbl_serv_init_chip(void *priv)
{
struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -73,6 +385,68 @@ static int nbl_serv_destroy_chip(void *p)
return 0;
}
+static int nbl_serv_configure_msix_map(void *priv, u16 num_net_msix,
+ u16 num_others_msix,
+ bool net_msix_mask_en)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops;
+ int ret = 0;
+
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ ret = disp_ops->configure_msix_map(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ num_net_msix, num_others_msix,
+ net_msix_mask_en);
+ if (ret)
+ return -EIO;
+
+ return 0;
+}
+
+static int nbl_serv_destroy_msix_map(void *priv)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops;
+ int ret = 0;
+
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ ret = disp_ops->destroy_msix_map(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+ if (ret)
+ return -EIO;
+
+ return 0;
+}
+
+static int nbl_serv_enable_mailbox_irq(void *priv, u16 vector_id,
+ bool enable_msix)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops;
+ int ret = 0;
+
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ ret = disp_ops->enable_mailbox_irq(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vector_id, enable_msix);
+ if (ret)
+ return -EIO;
+
+ return 0;
+}
+
+static u16 nbl_serv_get_msix_entry_id(void *priv, u16 local_vec_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->get_msix_entry_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ NBL_COMMON_TO_VSI_ID(common),
+ local_vec_id);
+}
+
static u16 nbl_serv_get_vsi_id(void *priv, u16 func_id, u16 type)
{
struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -151,11 +525,30 @@ static struct nbl_service_ops serv_ops = {
.init_chip = nbl_serv_init_chip,
.destroy_chip = nbl_serv_destroy_chip,
+ .configure_msix_map = nbl_serv_configure_msix_map,
+ .destroy_msix_map = nbl_serv_destroy_msix_map,
+ .enable_mailbox_irq = nbl_serv_enable_mailbox_irq,
+
+ .get_msix_entry_id = nbl_serv_get_msix_entry_id,
+
.register_net = nbl_serv_register_net,
.unregister_net = nbl_serv_unregister_net,
-
+ .setup_txrx_queues = nbl_serv_setup_txrx_queues,
+ .remove_txrx_queues = nbl_serv_remove_txrx_queues,
+
+ .setup_q2vsi = nbl_serv_setup_q2vsi,
+ .remove_q2vsi = nbl_serv_remove_q2vsi,
+ .setup_rss = nbl_serv_setup_rss,
+ .remove_rss = nbl_serv_remove_rss,
+ .setup_rss_indir = nbl_serv_setup_rss_indir,
.register_vsi_info = nbl_serv_register_vsi_info,
+ .alloc_rings = nbl_serv_alloc_rings,
+ .free_rings = nbl_serv_free_rings,
+ .enable_napis = nbl_serv_enable_napis,
+ .disable_napis = nbl_serv_disable_napis,
+ .set_mask_en = nbl_serv_set_mask_en,
+
.set_netdev_ops = nbl_serv_set_netdev_ops,
.get_vsi_id = nbl_serv_get_vsi_id,
.get_eth_id = nbl_serv_get_eth_id,
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
index 624e8a75a1ff..354951ff366f 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -314,7 +314,39 @@ static struct pci_driver nbl_driver = {
.remove = nbl_remove,
};
-module_pci_driver(nbl_driver);
+static int __init nbl_module_init(void)
+{
+ int status;
+
+ status = nbl_common_create_wq();
+ if (status) {
+ pr_err("Failed to create wq, err = %d\n", status);
+ goto wq_create_failed;
+ }
+ status = pci_register_driver(&nbl_driver);
+ if (status) {
+ pr_err("Failed to register PCI driver, err = %d\n", status);
+ goto pci_register_driver_failed;
+ }
+ pr_info("nbl module loaded\n");
+ return 0;
+
+pci_register_driver_failed:
+ nbl_common_destroy_wq();
+wq_create_failed:
+ return status;
+}
+
+static void __exit nbl_module_exit(void)
+{
+ pci_unregister_driver(&nbl_driver);
+
+ nbl_common_destroy_wq();
+
+ pr_info("nbl module unloaded\n");
+}
+module_init(nbl_module_init);
+module_exit(nbl_module_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Nebula Matrix Network Driver");
--
2.47.3
* [PATCH v3 net-next 15/15] net/nebula-matrix: add net dev start/stop operation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (12 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 14/15] net/nebula-matrix: add common dev start/stop operation illusion.wang
@ 2026-01-23 1:17 ` illusion.wang
2026-01-23 3:15 ` [PATCH v3 net-next 12/15] net/nebula-matrix: add common/ctrl dev init/reinit operation illusion.wang
2026-01-27 18:06 ` [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs Simon Horman
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 1:17 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
Start the net device:
1 allocate a netdev with multi-queue support, configure its private
data and associate it with the adapter.
2 allocate tx/rx rings and set up network resource management (vlan,
rate limiting).
3 build the netdev structure, map queues to MSI-X interrupts and
initialize hardware stats.
4 start the net vsi and register the net irq.
5 register the netdev.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 26 +
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 450 +++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 595 ++++++++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.h | 6 +
.../nbl/nbl_include/nbl_def_common.h | 1 +
.../nbl/nbl_include/nbl_def_service.h | 1 -
.../nbl/nbl_include/nbl_include.h | 43 ++
7 files changed, 1121 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index e39319cee159..72227da4458b 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -43,6 +43,18 @@
#define NBL_NETDEV_TO_ADAPTER(netdev) \
(NBL_NETDEV_PRIV_TO_ADAPTER( \
(struct nbl_netdev_priv *)netdev_priv(netdev)))
+
+#define NBL_NETDEV_TO_SERV_MGT(netdev) \
+ (NBL_ADAP_TO_SERV_MGT(NBL_NETDEV_PRIV_TO_ADAPTER(\
+ (struct nbl_netdev_priv *)netdev_priv(netdev))))
+
+#define NBL_NETDEV_TO_DEV_MGT(netdev) \
+ (NBL_ADAP_TO_DEV_MGT(NBL_NETDEV_TO_ADAPTER(netdev)))
+
+#define NBL_NETDEV_TO_COMMON(netdev) \
+ (NBL_ADAP_TO_COMMON(NBL_NETDEV_PRIV_TO_ADAPTER(\
+ (struct nbl_netdev_priv *)netdev_priv(netdev))))
+
#define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
#define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
@@ -59,6 +71,19 @@ enum {
NBL_CAP_IS_OCP_BIT,
};
+enum nbl_adapter_state {
+ NBL_DOWN,
+ NBL_RESETTING,
+ NBL_RESET_REQUESTED,
+ NBL_INITING,
+ NBL_INIT_FAILED,
+ NBL_RUNNING,
+ NBL_TESTING,
+ NBL_USER,
+ NBL_FATAL_ERR,
+ NBL_STATE_NBITS
+};
+
struct nbl_interface {
struct nbl_hw_ops_tbl *hw_ops_tbl;
struct nbl_resource_ops_tbl *resource_ops_tbl;
@@ -84,6 +109,7 @@ struct nbl_adapter {
struct nbl_common_info common;
struct nbl_product_base_ops *product_base_ops;
struct nbl_init_param init_param;
+ DECLARE_BITMAP(state, NBL_STATE_NBITS);
};
struct nbl_netdev_priv {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
index cdc1baa5988d..6a615594013d 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -13,6 +13,18 @@
static struct nbl_dev_ops dev_ops;
static int nbl_dev_clean_mailbox_schedule(struct nbl_dev_mgt *dev_mgt);
+static void nbl_dev_set_netdev_priv(struct net_device *netdev,
+ struct nbl_dev_vsi *vsi)
+{
+ struct nbl_netdev_priv *net_priv = netdev_priv(netdev);
+
+ net_priv->tx_queue_num = vsi->queue_num;
+ net_priv->rx_queue_num = vsi->queue_num;
+ net_priv->queue_size = vsi->queue_size;
+ net_priv->netdev = netdev;
+ net_priv->data_vsi = vsi->vsi_id;
+}
+
/* ---------- Interrupt config ---------- */
static irqreturn_t nbl_dev_clean_mailbox(int __always_unused irq, void *data)
{
@@ -41,6 +53,59 @@ static void nbl_dev_register_net_irq(struct nbl_dev_mgt *dev_mgt, u16 queue_num)
msix_info->serv_info[NBL_MSIX_NET_TYPE].hw_self_mask_en = 1;
}
+static int nbl_dev_request_net_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ struct nbl_msix_info_param param = { 0 };
+ int msix_num = msix_info->serv_info[NBL_MSIX_NET_TYPE].num;
+ int ret = 0;
+
+ param.msix_entries =
+ kcalloc(msix_num, sizeof(*param.msix_entries), GFP_KERNEL);
+ if (!param.msix_entries)
+ return -ENOMEM;
+
+ param.msix_num = msix_num;
+ memcpy(param.msix_entries,
+ msix_info->msix_entries +
+ msix_info->serv_info[NBL_MSIX_NET_TYPE].base_vector_id,
+ sizeof(param.msix_entries[0]) * msix_num);
+
+ ret = serv_ops->request_net_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ &param);
+
+ kfree(param.msix_entries);
+ return ret;
+}
+
+static void nbl_dev_free_net_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ struct nbl_msix_info_param param = { 0 };
+ int msix_num = msix_info->serv_info[NBL_MSIX_NET_TYPE].num;
+
+ param.msix_entries =
+ kcalloc(msix_num, sizeof(*param.msix_entries), GFP_KERNEL);
+ if (!param.msix_entries)
+ return;
+
+ param.msix_num = msix_num;
+ memcpy(param.msix_entries,
+ msix_info->msix_entries +
+ msix_info->serv_info[NBL_MSIX_NET_TYPE].base_vector_id,
+ sizeof(param.msix_entries[0]) * msix_num);
+
+ serv_ops->free_net_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), &param);
+
+ kfree(param.msix_entries);
+}
+
static int nbl_dev_request_mailbox_irq(struct nbl_dev_mgt *dev_mgt)
{
struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
@@ -445,7 +510,74 @@ static void nbl_dev_remove_ctrl_dev(struct nbl_adapter *adapter)
serv_ops->destroy_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
}
+static int nbl_dev_netdev_open(struct net_device *netdev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ return serv_ops->netdev_open(netdev);
+}
+
+static int nbl_dev_netdev_stop(struct net_device *netdev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ return serv_ops->netdev_stop(netdev);
+}
+
+static netdev_tx_t nbl_dev_start_xmit(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_resource_pt_ops *pt_ops = NBL_DEV_MGT_TO_RES_PT_OPS(dev_mgt);
+
+ return pt_ops->start_xmit(skb, netdev);
+}
+
+static void nbl_dev_netdev_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ serv_ops->get_stats64(netdev, stats);
+}
+
+static netdev_features_t
+nbl_dev_netdev_features_check(struct sk_buff *skb, struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ return serv_ops->features_check(skb, netdev, features);
+}
+
+static u16 nbl_dev_netdev_select_queue(struct net_device *netdev,
+ struct sk_buff *skb,
+ struct net_device *sb_dev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ return serv_ops->select_queue(netdev, skb, sb_dev);
+}
+
static const struct net_device_ops netdev_ops_leonis_pf = {
+ .ndo_open = nbl_dev_netdev_open,
+ .ndo_stop = nbl_dev_netdev_stop,
+ .ndo_start_xmit = nbl_dev_start_xmit,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_get_stats64 = nbl_dev_netdev_get_stats64,
+ .ndo_features_check = nbl_dev_netdev_features_check,
+ .ndo_select_queue = nbl_dev_netdev_select_queue,
};
static int nbl_dev_setup_netops_leonis(void *priv, struct net_device *netdev,
@@ -460,6 +592,63 @@ static int nbl_dev_setup_netops_leonis(void *priv, struct net_device *netdev,
return 0;
}
+static void nbl_dev_remove_netops(struct net_device *netdev)
+{
+ netdev->netdev_ops = NULL;
+}
+
+static int nbl_dev_cfg_netdev(struct net_device *netdev,
+ struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_dev_net_ops *net_dev_ops =
+ NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt);
+ u64 vlan_features = 0;
+ int ret = 0;
+
+ if (param->pci_using_dac)
+ netdev->features |= NETIF_F_HIGHDMA;
+ netdev->watchdog_timeo = 5 * HZ;
+
+ vlan_features = register_result->vlan_features ?
+ register_result->vlan_features :
+ register_result->features;
+ netdev->hw_features |=
+ nbl_features_to_netdev_features(register_result->hw_features);
+ netdev->features |=
+ nbl_features_to_netdev_features(register_result->features);
+ netdev->vlan_features |= nbl_features_to_netdev_features(vlan_features);
+
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+
+ SET_DEV_MIN_MTU(netdev, ETH_MIN_MTU);
+ SET_DEV_MAX_MTU(netdev, register_result->max_mtu);
+ netdev->mtu = min_t(u16, register_result->max_mtu, NBL_DEFAULT_MTU);
+ if (is_valid_ether_addr(register_result->mac))
+ eth_hw_addr_set(netdev, register_result->mac);
+ else
+ eth_hw_addr_random(netdev);
+
+ ether_addr_copy(netdev->perm_addr, netdev->dev_addr);
+
+ netdev->needed_headroom =
+ serv_ops->get_tx_headroom(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+
+ ret = net_dev_ops->setup_netdev_ops(dev_mgt, netdev, param);
+ if (ret)
+ goto set_ops_fail;
+ return 0;
+set_ops_fail:
+ return ret;
+}
+
+static void nbl_dev_reset_netdev(struct net_device *netdev)
+{
+ nbl_dev_remove_netops(netdev);
+}
+
static int nbl_dev_register_net(struct nbl_dev_mgt *dev_mgt,
struct nbl_register_net_result *register_result)
{
@@ -528,6 +717,62 @@ static void nbl_dev_vsi_common_remove(struct nbl_dev_mgt *dev_mgt,
{
}
+static int nbl_dev_vsi_common_start(struct nbl_dev_mgt *dev_mgt,
+ struct net_device *netdev,
+ struct nbl_dev_vsi *vsi)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt);
+ void *p = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+ int ret;
+
+ vsi->napi_netdev = netdev;
+
+ ret = serv_ops->setup_q2vsi(p, vsi->vsi_id);
+ if (ret) {
+ dev_err(dev, "Setup q2vsi failed\n");
+ goto set_q2vsi_fail;
+ }
+
+ ret = serv_ops->setup_rss(p, vsi->vsi_id);
+ if (ret) {
+ dev_err(dev, "Setup rss failed\n");
+ goto set_rss_fail;
+ }
+
+ ret = serv_ops->setup_rss_indir(p, vsi->vsi_id);
+ if (ret) {
+ dev_err(dev, "Setup rss indir failed\n");
+ goto setup_rss_indir_fail;
+ }
+
+ ret = serv_ops->enable_napis(p, vsi->index);
+ if (ret) {
+ dev_err(dev, "Enable napis failed\n");
+ goto enable_napi_fail;
+ }
+ return 0;
+
+enable_napi_fail:
+setup_rss_indir_fail:
+ serv_ops->remove_rss(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id);
+set_rss_fail:
+ serv_ops->remove_q2vsi(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id);
+set_q2vsi_fail:
+ return ret;
+}
+
+static void nbl_dev_vsi_common_stop(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_dev_vsi *vsi)
+{
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ serv_ops->disable_napis(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ vsi->index);
+ serv_ops->remove_rss(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id);
+ serv_ops->remove_q2vsi(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id);
+}
+
static int nbl_dev_vsi_data_register(struct nbl_dev_mgt *dev_mgt,
struct nbl_init_param *param,
void *vsi_data)
@@ -564,12 +809,55 @@ static void nbl_dev_vsi_data_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data)
nbl_dev_vsi_common_remove(dev_mgt, vsi);
}
+static int nbl_dev_vsi_data_start(void *dev_priv, struct net_device *netdev,
+ void *vsi_data)
+{
+ struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)dev_priv;
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+ int ret;
+
+ ret = nbl_dev_vsi_common_start(dev_mgt, netdev, vsi);
+ return ret;
+}
+
+static void nbl_dev_vsi_data_stop(void *dev_priv, void *vsi_data)
+{
+ struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)dev_priv;
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+
+ nbl_dev_vsi_common_stop(dev_mgt, vsi);
+}
+
+static int nbl_dev_vsi_data_netdev_build(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param,
+ struct net_device *netdev,
+ void *vsi_data)
+{
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+
+ vsi->netdev = netdev;
+ return nbl_dev_cfg_netdev(netdev, dev_mgt, param,
+ &vsi->register_result);
+}
+
+static void nbl_dev_vsi_data_netdev_destroy(struct nbl_dev_mgt *dev_mgt,
+ void *vsi_data)
+{
+ struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data;
+
+ nbl_dev_reset_netdev(vsi->netdev);
+}
+
static struct nbl_dev_vsi_tbl vsi_tbl[NBL_VSI_MAX] = {
[NBL_VSI_DATA] = {
.vsi_ops = {
.register_vsi = nbl_dev_vsi_data_register,
.setup = nbl_dev_vsi_data_setup,
.remove = nbl_dev_vsi_data_remove,
+ .start = nbl_dev_vsi_data_start,
+ .stop = nbl_dev_vsi_data_stop,
+ .netdev_build = nbl_dev_vsi_data_netdev_build,
+ .netdev_destroy = nbl_dev_vsi_data_netdev_destroy,
},
},
};
@@ -857,6 +1145,156 @@ void nbl_dev_remove(void *p)
}
/* ---------- Dev start process ---------- */
+static int nbl_dev_start_net_dev(struct nbl_adapter *adapter,
+ struct nbl_init_param *param)
+{
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct net_device *netdev = net_dev->netdev;
+ struct nbl_netdev_priv *net_priv;
+ struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt);
+ void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+ struct nbl_ring_param ring_param = {0};
+ struct nbl_dev_vsi *vsi;
+ u16 net_vector_id, queue_num;
+ int ret;
+
+ vsi = nbl_dev_vsi_select(dev_mgt, NBL_VSI_DATA);
+ if (!vsi)
+ return -EFAULT;
+
+ queue_num = vsi->queue_num;
+ netdev = alloc_etherdev_mqs(sizeof(struct nbl_netdev_priv), queue_num,
+ queue_num);
+ if (!netdev) {
+ dev_err(dev, "Alloc net device failed\n");
+ ret = -ENOMEM;
+ goto alloc_netdev_fail;
+ }
+
+ SET_NETDEV_DEV(netdev, dev);
+ net_priv = netdev_priv(netdev);
+ net_priv->adapter = adapter;
+ nbl_dev_set_netdev_priv(netdev, vsi);
+
+ net_dev->netdev = netdev;
+ common->msg_enable = netif_msg_init(-1, DEFAULT_MSG_ENABLE);
+ serv_ops->set_mask_en(priv, 1);
+
+ ring_param.tx_ring_num = net_dev->kernel_queue_num;
+ ring_param.rx_ring_num = net_dev->kernel_queue_num;
+ ring_param.queue_size = net_priv->queue_size;
+ ret = serv_ops->alloc_rings(priv, netdev, &ring_param);
+ if (ret) {
+ dev_err(dev, "Alloc rings failed\n");
+ goto alloc_rings_fail;
+ }
+
+ ret = serv_ops->setup_net_resource_mgt(priv, netdev,
+ vsi->register_result.vlan_proto,
+ vsi->register_result.vlan_tci,
+ vsi->register_result.rate);
+ if (ret) {
+ dev_err(dev, "setup net mgt failed\n");
+ goto setup_net_mgt_fail;
+ }
+
+ /* The netdev must be built before setup_txrx_queues: the MAC snoop
+ * check trusts a MAC that the PF configured for a VF via ip link,
+ * and a VF is not permitted to change its MAC once its queues have
+ * been allocated.
+ */
+ ret = vsi->ops->netdev_build(dev_mgt, param, netdev, vsi);
+ if (ret) {
+ dev_err(dev, "Build netdev failed, selected vsi %d\n",
+ vsi->index);
+ goto build_netdev_fail;
+ }
+
+ net_vector_id = msix_info->serv_info[NBL_MSIX_NET_TYPE].base_vector_id;
+ ret = serv_ops->setup_txrx_queues(priv, vsi->vsi_id,
+ net_dev->kernel_queue_num,
+ net_vector_id);
+ if (ret) {
+ dev_err(dev, "Set queue map failed\n");
+ goto set_queue_fail;
+ }
+
+ ret = vsi->ops->start(dev_mgt, netdev, vsi);
+ if (ret) {
+ dev_err(dev, "Start vsi failed, selected vsi %d\n", vsi->index);
+ goto start_vsi_fail;
+ }
+
+ ret = nbl_dev_request_net_irq(dev_mgt);
+ if (ret) {
+ dev_err(dev, "request irq failed\n");
+ goto request_irq_fail;
+ }
+
+ netif_carrier_off(netdev);
+ ret = register_netdev(netdev);
+ if (ret) {
+ dev_err(dev, "Register netdev failed\n");
+ goto register_netdev_fail;
+ }
+ set_bit(NBL_DOWN, adapter->state);
+
+ return 0;
+register_netdev_fail:
+ nbl_dev_free_net_irq(dev_mgt);
+request_irq_fail:
+ vsi->ops->stop(dev_mgt, vsi);
+start_vsi_fail:
+ serv_ops->remove_txrx_queues(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ vsi->vsi_id);
+set_queue_fail:
+ vsi->ops->netdev_destroy(dev_mgt, vsi);
+build_netdev_fail:
+ serv_ops->remove_net_resource_mgt(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+setup_net_mgt_fail:
+ serv_ops->free_rings(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+alloc_rings_fail:
+ free_netdev(netdev);
+alloc_netdev_fail:
+ return ret;
+}
+
+static void nbl_dev_stop_net_dev(struct nbl_adapter *adapter)
+{
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_dev_vsi *vsi;
+ struct net_device *netdev;
+
+ if (!net_dev)
+ return;
+
+ netdev = net_dev->netdev;
+ vsi = net_dev->vsi_ctrl.vsi_list[NBL_VSI_DATA];
+ if (!vsi)
+ return;
+
+ unregister_netdev(netdev);
+ rtnl_lock();
+ netif_device_detach(netdev);
+ rtnl_unlock();
+
+ vsi->ops->netdev_destroy(dev_mgt, vsi);
+ vsi->ops->stop(dev_mgt, vsi);
+ nbl_dev_free_net_irq(dev_mgt);
+ serv_ops->remove_net_resource_mgt(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+ serv_ops->remove_txrx_queues(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ vsi->vsi_id);
+ serv_ops->free_rings(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+
+ free_netdev(netdev);
+}
static int nbl_dev_start_common_dev(struct nbl_adapter *adapter,
struct nbl_init_param *param)
@@ -908,6 +1346,17 @@ int nbl_dev_start(void *p, struct nbl_init_param *param)
int ret;
ret = nbl_dev_start_common_dev(adapter, param);
+ if (ret)
+ goto start_common_dev_fail;
+
+ ret = nbl_dev_start_net_dev(adapter, param);
+ if (ret)
+ goto start_net_dev_fail;
+ return 0;
+
+start_net_dev_fail:
+ nbl_dev_stop_common_dev(adapter);
+start_common_dev_fail:
return ret;
}
@@ -915,5 +1364,6 @@ void nbl_dev_stop(void *p)
{
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ nbl_dev_stop_net_dev(adapter);
nbl_dev_stop_common_dev(adapter);
}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
index b83b79a8001f..11e590bb00dc 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
@@ -13,6 +13,146 @@
#include "nbl_service.h"
+static int nbl_serv_set_promisc_mode(void *priv, u16 vsi_id, u16 mode);
+
+static void nbl_serv_set_queue_param(struct nbl_serv_ring *ring, u16 desc_num,
+ struct nbl_txrx_queue_param *param,
+ u16 vsi_id, u16 global_vec_id)
+{
+ param->vsi_id = vsi_id;
+ param->dma = ring->dma;
+ param->desc_num = desc_num;
+ param->local_queue_id = ring->local_queue_id / 2;
+ param->global_vec_id = global_vec_id;
+ param->intr_en = 1;
+ param->intr_mask = 1;
+ param->extend_header = 1;
+ param->rxcsum = 1;
+ param->split = 0;
+}
+
+/*
+ * In virtio mode, the emulator triggers the txrx_registers
+ * configuration based only on tx_ring, so the rx_info must be
+ * delivered before the tx_info.
+ */
+static int nbl_serv_setup_queues(struct nbl_service_mgt *serv_mgt,
+ struct nbl_serv_ring_vsi_info *vsi_info)
+{
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_txrx_queue_param param = {0};
+ struct nbl_serv_ring *ring;
+ struct nbl_serv_vector *vector;
+ u16 start = vsi_info->ring_offset,
+ end = vsi_info->ring_offset + vsi_info->ring_num;
+ int i, ret = 0;
+
+ for (i = start; i < end; i++) {
+ vector = &ring_mgt->vectors[i];
+ ring = &ring_mgt->rx_rings[i];
+ nbl_serv_set_queue_param(ring, ring_mgt->rx_desc_num, &param,
+ vsi_info->vsi_id,
+ vector->global_vec_id);
+
+ ret = disp_ops->setup_queue(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ &param, false);
+ if (ret)
+ return ret;
+ }
+
+ for (i = start; i < end; i++) {
+ vector = &ring_mgt->vectors[i];
+ ring = &ring_mgt->tx_rings[i];
+ nbl_serv_set_queue_param(ring, ring_mgt->tx_desc_num, &param,
+ vsi_info->vsi_id,
+ vector->global_vec_id);
+
+ ret = disp_ops->setup_queue(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ &param, true);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void nbl_serv_flush_rx_queues(struct nbl_service_mgt *serv_mgt,
+ u16 ring_offset, u16 ring_num)
+{
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ int i;
+
+ for (i = ring_offset; i < ring_offset + ring_num; i++)
+ disp_ops->kick_rx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i);
+}
+
+static int nbl_serv_setup_rings(struct nbl_service_mgt *serv_mgt,
+ struct net_device *netdev,
+ struct nbl_serv_ring_vsi_info *vsi_info,
+ bool use_napi)
+{
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt);
+ u16 start = vsi_info->ring_offset,
+ end = vsi_info->ring_offset + vsi_info->ring_num;
+ int i, ret = 0;
+
+ for (i = start; i < end; i++) {
+ ring_mgt->tx_rings[i].dma =
+ disp_ops->start_tx_ring(p, i);
+ if (!ring_mgt->tx_rings[i].dma) {
+ netdev_err(netdev, "Failed to start tx ring %d\n", i);
+ ret = -EFAULT;
+ break;
+ }
+ }
+ if (i != end) {
+ while (i-- > start)
+ disp_ops->stop_tx_ring(p, i);
+ goto tx_err;
+ }
+
+ for (i = start; i < end; i++) {
+ ring_mgt->rx_rings[i].dma =
+ disp_ops->start_rx_ring(p, i, use_napi);
+ if (!ring_mgt->rx_rings[i].dma) {
+ netdev_err(netdev, "Failed to start rx ring %d\n", i);
+ ret = -EFAULT;
+ break;
+ }
+ }
+ if (i != end) {
+ while (i-- > start)
+ disp_ops->stop_rx_ring(p, i);
+ goto rx_err;
+ }
+
+ return 0;
+
+rx_err:
+ for (i = start; i < end; i++)
+ disp_ops->stop_tx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i);
+tx_err:
+ return ret;
+}
+
+static void nbl_serv_stop_rings(struct nbl_service_mgt *serv_mgt,
+ struct nbl_serv_ring_vsi_info *vsi_info)
+{
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ u16 start = vsi_info->ring_offset,
+ end = vsi_info->ring_offset + vsi_info->ring_num;
+ int i;
+
+ for (i = start; i < end; i++)
+ disp_ops->stop_tx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i);
+
+ for (i = start; i < end; i++)
+ disp_ops->stop_rx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i);
+}
+
static int nbl_serv_set_tx_rings(struct nbl_serv_ring_mgt *ring_mgt,
struct net_device *netdev, struct device *dev)
{
@@ -104,6 +244,297 @@ static void nbl_serv_remove_vectors(struct nbl_serv_ring_mgt *ring_mgt,
ring_mgt->vectors = NULL;
}
+int nbl_serv_vsi_open(void *priv, struct net_device *netdev, u16 vsi_index,
+ u16 real_qps, bool use_napi)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info =
+ &ring_mgt->vsi_info[vsi_index];
+ int ret = 0;
+
+ if (vsi_info->started)
+ return 0;
+
+ ret = nbl_serv_setup_rings(serv_mgt, netdev, vsi_info, use_napi);
+ if (ret) {
+ netdev_err(netdev, "Fail to setup rings\n");
+ goto setup_rings_fail;
+ }
+
+ ret = nbl_serv_setup_queues(serv_mgt, vsi_info);
+ if (ret) {
+ netdev_err(netdev, "Fail to setup queues\n");
+ goto setup_queue_fail;
+ }
+ nbl_serv_flush_rx_queues(serv_mgt, vsi_info->ring_offset,
+ vsi_info->ring_num);
+ ret = disp_ops->cfg_dsch(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_info->vsi_id, true);
+ if (ret) {
+ netdev_err(netdev, "Fail to setup dsch\n");
+ goto setup_dsch_fail;
+ }
+
+ vsi_info->active_ring_num = real_qps;
+ ret = disp_ops->setup_cqs(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_info->vsi_id, real_qps, false);
+ if (ret)
+ goto setup_cqs_fail;
+
+ vsi_info->started = true;
+ return 0;
+
+setup_cqs_fail:
+ disp_ops->cfg_dsch(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ NBL_COMMON_TO_VSI_ID(common), false);
+setup_dsch_fail:
+ disp_ops->remove_all_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ NBL_COMMON_TO_VSI_ID(common));
+setup_queue_fail:
+ nbl_serv_stop_rings(serv_mgt, vsi_info);
+setup_rings_fail:
+ return ret;
+}
+
+int nbl_serv_vsi_stop(void *priv, u16 vsi_index)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info =
+ &ring_mgt->vsi_info[vsi_index];
+
+ if (!vsi_info->started)
+ return 0;
+
+ vsi_info->started = false;
+ /* modify default action and RSS configuration */
+ disp_ops->remove_cqs(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_info->vsi_id);
+
+ /* clear dsch config */
+ disp_ops->cfg_dsch(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_info->vsi_id, false);
+
+ /* disable and reset tx/rx logic queues */
+ disp_ops->remove_all_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_info->vsi_id);
+
+ /* free tx and rx bufs */
+ nbl_serv_stop_rings(serv_mgt, vsi_info);
+
+ return 0;
+}
+
+int nbl_serv_netdev_open(struct net_device *netdev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info;
+ int num_cpus, real_qps, ret = 0;
+
+ if (!test_bit(NBL_DOWN, adapter->state))
+ return -EBUSY;
+
+ netdev_dbg(netdev, "Nbl open\n");
+
+ netif_carrier_off(netdev);
+
+ vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA];
+
+ if (vsi_info->active_ring_num) {
+ real_qps = vsi_info->active_ring_num;
+ } else {
+ num_cpus = num_online_cpus();
+ real_qps = num_cpus > vsi_info->ring_num ? vsi_info->ring_num :
+ num_cpus;
+ }
+
+ ret = nbl_serv_vsi_open(serv_mgt, netdev, NBL_VSI_DATA, real_qps, true);
+ if (ret)
+ goto vsi_open_fail;
+
+ ret = netif_set_real_num_tx_queues(netdev, real_qps);
+ if (ret)
+ goto setup_real_qps_fail;
+ ret = netif_set_real_num_rx_queues(netdev, real_qps);
+ if (ret)
+ goto setup_real_qps_fail;
+
+ netif_tx_start_all_queues(netdev);
+ clear_bit(NBL_DOWN, adapter->state);
+ set_bit(NBL_RUNNING, adapter->state);
+ netif_carrier_on(netdev);
+ netdev_dbg(netdev, "Nbl open ok!\n");
+ nbl_serv_set_promisc_mode((void *)serv_mgt, vsi_info->vsi_id, 1);
+ return 0;
+
+setup_real_qps_fail:
+ nbl_serv_vsi_stop(serv_mgt, NBL_VSI_DATA);
+vsi_open_fail:
+ return ret;
+}
+
+int nbl_serv_netdev_stop(struct net_device *netdev)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_serv_ring_vsi_info *vsi_info;
+
+ vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA];
+ if (!test_bit(NBL_RUNNING, adapter->state))
+ return -EBUSY;
+
+ netdev_dbg(netdev, "Nbl stop\n");
+ set_bit(NBL_DOWN, adapter->state);
+ clear_bit(NBL_RUNNING, adapter->state);
+
+ netif_tx_stop_all_queues(netdev);
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+ synchronize_net();
+ nbl_serv_vsi_stop(serv_mgt, NBL_VSI_DATA);
+ nbl_serv_set_promisc_mode((void *)serv_mgt, vsi_info->vsi_id, 0);
+ netdev_dbg(netdev, "Nbl stop ok!\n");
+
+ return 0;
+}
+
+static void nbl_serv_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev);
+ struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ struct nbl_stats net_stats = { 0 };
+
+ if (!stats) {
+ netdev_err(netdev, "get_stats64: stats is NULL\n");
+ return;
+ }
+
+ disp_ops->get_net_stats(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ &net_stats);
+
+ stats->rx_packets = net_stats.rx_packets;
+ stats->tx_packets = net_stats.tx_packets;
+ stats->rx_bytes = net_stats.rx_bytes;
+ stats->tx_bytes = net_stats.tx_bytes;
+ stats->multicast = net_stats.rx_multicast_packets;
+
+ stats->rx_errors = 0;
+ stats->tx_errors = 0;
+ stats->rx_length_errors = netdev->stats.rx_length_errors;
+ stats->rx_crc_errors = netdev->stats.rx_crc_errors;
+ stats->rx_frame_errors = netdev->stats.rx_frame_errors;
+ stats->rx_dropped = 0;
+ stats->tx_dropped = 0;
+}
+
+static netdev_features_t nbl_serv_features_check(struct sk_buff *skb,
+ struct net_device *dev,
+ netdev_features_t features)
+{
+ u32 l2_l3_hrd_len = 0, l4_hrd_len = 0, total_hrd_len = 0;
+ u8 l4_proto = 0;
+ __be16 protocol, frag_off;
+ unsigned char *exthdr;
+ int ret;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+ } l4;
+
+ /* No point in doing any of this if neither checksum nor GSO are
+ * being requested for this frame. We can rule out both by just
+ * checking for CHECKSUM_PARTIAL.
+ */
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return features;
+
+ /* We cannot support GSO if the MSS is going to be less than
+ * 256 bytes or bigger than 16383 bytes. If it is, we need
+ * to drop support for GSO.
+ */
+ if (skb_is_gso(skb) &&
+ (skb_shinfo(skb)->gso_size < NBL_TX_TSO_MSS_MIN ||
+ skb_shinfo(skb)->gso_size > NBL_TX_TSO_MSS_MAX))
+ features &= ~NETIF_F_GSO_MASK;
+
+ l2_l3_hrd_len = (u32)(skb_transport_header(skb) - skb->data);
+
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+ protocol = vlan_get_protocol(skb);
+
+ if (protocol == htons(ETH_P_IP)) {
+ l4_proto = ip.v4->protocol;
+ } else if (protocol == htons(ETH_P_IPV6)) {
+ exthdr = ip.hdr + sizeof(*ip.v6);
+ l4_proto = ip.v6->nexthdr;
+ if (l4.hdr != exthdr) {
+ ret = ipv6_skip_exthdr(skb, exthdr - skb->data,
+ &l4_proto, &frag_off);
+ if (ret < 0)
+ goto out_rm_features;
+ }
+ } else {
+ goto out_rm_features;
+ }
+
+ switch (l4_proto) {
+ case IPPROTO_TCP:
+ l4_hrd_len = (l4.tcp->doff) * 4;
+ break;
+ case IPPROTO_UDP:
+ l4_hrd_len = sizeof(struct udphdr);
+ break;
+ case IPPROTO_SCTP:
+ l4_hrd_len = sizeof(struct sctphdr);
+ break;
+ default:
+ goto out_rm_features;
+ }
+
+ total_hrd_len = l2_l3_hrd_len + l4_hrd_len;
+
+ /* TX checksum offload supports a total header length of [0, 255] bytes */
+ if (total_hrd_len > NBL_TX_CHECKSUM_OFFLOAD_L2L3L4_HDR_LEN_MAX)
+ goto out_rm_features;
+
+ /* TSO supports a total header length of [42, 128] bytes */
+ if (total_hrd_len < NBL_TX_TSO_L2L3L4_HDR_LEN_MIN ||
+ total_hrd_len > NBL_TX_TSO_L2L3L4_HDR_LEN_MAX)
+ features &= ~NETIF_F_GSO_MASK;
+
+ if (skb->encapsulation)
+ goto out_rm_features;
+
+ return features;
+
+out_rm_features:
+ return features & ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ NETIF_F_SCTP_CRC | NETIF_F_GSO_MASK);
+}
+
+static u16
+nbl_serv_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev)
+{
+ return netdev_pick_tx(netdev, skb, sb_dev);
+}
+
static int
nbl_serv_register_net(void *priv, struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result)
@@ -345,6 +776,23 @@ static void nbl_serv_set_mask_en(void *priv, bool enable)
ring_mgt->net_msix_mask_en = enable;
}
+static int nbl_serv_set_promisc_mode(void *priv, u16 vsi_id, u16 mode)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->set_promisc_mode(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ vsi_id, mode);
+}
+
+static u32 nbl_serv_get_tx_headroom(void *priv)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->get_tx_headroom(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+}
+
static int nbl_serv_init_chip(void *priv)
{
struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -436,6 +884,95 @@ static int nbl_serv_enable_mailbox_irq(void *priv, u16 vector_id,
return 0;
}
+static irqreturn_t nbl_serv_clean_rings(int __always_unused irq, void *data)
+{
+ struct nbl_serv_vector *vector = (struct nbl_serv_vector *)data;
+
+ napi_schedule_irqoff(&vector->nbl_napi->napi);
+
+ return IRQ_HANDLED;
+}
+
+static int nbl_serv_request_net_irq(void *priv,
+ struct nbl_msix_info_param *msix_info)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct nbl_serv_net_resource_mgt *net_resource_mgt =
+ NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct nbl_serv_ring *tx_ring, *rx_ring;
+ struct nbl_serv_vector *vector;
+ u32 irq_num;
+ int i, ret = 0;
+
+ for (i = 0; i < ring_mgt->tx_ring_num; i++) {
+ tx_ring = &ring_mgt->tx_rings[i];
+ rx_ring = &ring_mgt->rx_rings[i];
+ vector = &ring_mgt->vectors[i];
+ vector->tx_ring = tx_ring;
+ vector->rx_ring = rx_ring;
+
+ irq_num = msix_info->msix_entries[i].vector;
+ snprintf(vector->name, sizeof(vector->name),
+ "nbl_txrx%d@pci:%s", i,
+ pci_name(NBL_COMMON_TO_PDEV(common)));
+ ret = devm_request_irq(dev, irq_num, nbl_serv_clean_rings, 0,
+ vector->name, vector);
+ if (ret) {
+ nbl_err(common, "TxRx queue %d failed to request irq: %d\n",
+ i, ret);
+ goto request_irq_err;
+ }
+ }
+
+ net_resource_mgt->num_net_msix = msix_info->msix_num;
+
+ return 0;
+
+request_irq_err:
+ while (i--) {
+ vector = &ring_mgt->vectors[i];
+
+ irq_num = msix_info->msix_entries[i].vector;
+ irq_set_affinity_hint(irq_num, NULL);
+ devm_free_irq(dev, irq_num, vector);
+ }
+ return ret;
+}
+
+static void nbl_serv_free_net_irq(void *priv,
+ struct nbl_msix_info_param *msix_info)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct nbl_serv_vector *vector;
+ u32 irq_num;
+ int i;
+
+ for (i = 0; i < ring_mgt->tx_ring_num; i++) {
+ vector = &ring_mgt->vectors[i];
+
+ irq_num = msix_info->msix_entries[i].vector;
+ irq_set_affinity_hint(irq_num, NULL);
+ devm_free_irq(dev, irq_num, vector);
+ }
+}
+
+static u16 nbl_serv_get_global_vector(void *priv, u16 local_vec_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->get_global_vector(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ NBL_COMMON_TO_VSI_ID(common),
+ local_vec_id);
+}
+
static u16 nbl_serv_get_msix_entry_id(void *priv, u16 local_vec_id)
{
struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -481,6 +1018,47 @@ static void nbl_serv_set_netdev_ops(void *priv,
(void *)net_device_ops;
}
+static void nbl_serv_remove_net_resource_mgt(void *priv)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_serv_net_resource_mgt *net_mgt;
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct device *dev;
+
+ net_mgt = NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+ dev = NBL_COMMON_TO_DEV(common);
+
+ if (net_mgt) {
+ devm_kfree(dev, net_mgt);
+ NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) = NULL;
+ }
+}
+
+static int nbl_serv_setup_net_resource_mgt(void *priv,
+ struct net_device *netdev,
+ u16 vlan_proto, u16 vlan_tci,
+ u32 rate)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ struct device *dev = NBL_COMMON_TO_DEV(common);
+ struct nbl_serv_net_resource_mgt *net_resource_mgt;
+ int size = sizeof(struct nbl_serv_net_resource_mgt);
+
+ net_resource_mgt = devm_kzalloc(dev, size, GFP_KERNEL);
+ if (!net_resource_mgt)
+ return -ENOMEM;
+
+ net_resource_mgt->netdev = netdev;
+ net_resource_mgt->serv_mgt = serv_mgt;
+ net_resource_mgt->vlan_proto = vlan_proto;
+ net_resource_mgt->vlan_tci = vlan_tci;
+ net_resource_mgt->max_tx_rate = rate;
+ NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) = net_resource_mgt;
+
+ return 0;
+}
+
static int nbl_serv_register_vsi_info(void *priv,
struct nbl_vsi_param *vsi_param)
{
@@ -529,6 +1107,9 @@ static struct nbl_service_ops serv_ops = {
.destroy_msix_map = nbl_serv_destroy_msix_map,
.enable_mailbox_irq = nbl_serv_enable_mailbox_irq,
+ .request_net_irq = nbl_serv_request_net_irq,
+ .free_net_irq = nbl_serv_free_net_irq,
+ .get_global_vector = nbl_serv_get_global_vector,
.get_msix_entry_id = nbl_serv_get_msix_entry_id,
.register_net = nbl_serv_register_net,
@@ -549,9 +1130,23 @@ static struct nbl_service_ops serv_ops = {
.disable_napis = nbl_serv_disable_napis,
.set_mask_en = nbl_serv_set_mask_en,
+ .set_promisc_mode = nbl_serv_set_promisc_mode,
+ .get_tx_headroom = nbl_serv_get_tx_headroom,
+
+ .vsi_open = nbl_serv_vsi_open,
+ .vsi_stop = nbl_serv_vsi_stop,
+ /* For netdev ops */
+ .netdev_open = nbl_serv_netdev_open,
+ .netdev_stop = nbl_serv_netdev_stop,
+ .get_stats64 = nbl_serv_get_stats64,
+ .features_check = nbl_serv_features_check,
+ .select_queue = nbl_serv_select_queue,
+
.set_netdev_ops = nbl_serv_set_netdev_ops,
.get_vsi_id = nbl_serv_get_vsi_id,
.get_eth_id = nbl_serv_get_eth_id,
+ .setup_net_resource_mgt = nbl_serv_setup_net_resource_mgt,
+ .remove_net_resource_mgt = nbl_serv_remove_net_resource_mgt,
};
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
index 273218200526..49d7fea5a7cc 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
@@ -123,4 +123,10 @@ struct nbl_service_mgt {
struct nbl_serv_net_resource_mgt *net_resource_mgt;
};
+int nbl_serv_netdev_open(struct net_device *netdev);
+int nbl_serv_netdev_stop(struct net_device *netdev);
+int nbl_serv_vsi_open(void *priv, struct net_device *netdev, u16 vsi_index,
+ u16 real_qps, bool use_napi);
+int nbl_serv_vsi_stop(void *priv, u16 vsi_index);
+
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
index fe176ee72128..6b2bb6d15fdf 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
@@ -161,4 +161,5 @@ int nbl_common_alloc_hash_node(void *priv, void *key, void *data,
void **out_data);
void *nbl_common_get_hash_node(void *priv, void *key);
void nbl_common_free_hash_node(void *priv, void *key);
+
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
index 29b541649a90..b6eab6a621dc 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
@@ -50,7 +50,6 @@ struct nbl_service_ops {
int (*setup_rss_indir)(void *priv, u16 vsi_id);
int (*alloc_rings)(void *priv, struct net_device *dev,
struct nbl_ring_param *param);
- void (*cpu_affinity_init)(void *priv, u16 rings_num);
void (*free_rings)(void *priv);
int (*enable_napis)(void *priv, u16 vsi_index);
void (*disable_napis)(void *priv, u16 vsi_index);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 0ab4dd981c8a..77d98f15a6c8 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -29,6 +29,8 @@
#define NBL_MAX_FUNC (520)
+#define SET_DEV_MIN_MTU(netdev, mtu) ((netdev)->min_mtu = (mtu))
+#define SET_DEV_MAX_MTU(netdev, mtu) ((netdev)->max_mtu = (mtu))
/* Used for macros to pass checkpatch */
#define NBL_NAME(x) x
@@ -255,7 +257,48 @@ enum {
NBL_FEATURES_COUNT
};
+static const netdev_features_t nbl_netdev_features[] = {
+ [NBL_NETIF_F_SG_BIT] = NETIF_F_SG,
+ [NBL_NETIF_F_IP_CSUM_BIT] = NETIF_F_IP_CSUM,
+ [NBL_NETIF_F_IPV6_CSUM_BIT] = NETIF_F_IPV6_CSUM,
+ [NBL_NETIF_F_HIGHDMA_BIT] = NETIF_F_HIGHDMA,
+ [NBL_NETIF_F_HW_VLAN_CTAG_TX_BIT] = NETIF_F_HW_VLAN_CTAG_TX,
+ [NBL_NETIF_F_HW_VLAN_CTAG_RX_BIT] = NETIF_F_HW_VLAN_CTAG_RX,
+ [NBL_NETIF_F_HW_VLAN_CTAG_FILTER_BIT] = NETIF_F_HW_VLAN_CTAG_FILTER,
+ [NBL_NETIF_F_TSO_BIT] = NETIF_F_TSO,
+ [NBL_NETIF_F_GSO_ROBUST_BIT] = NETIF_F_GSO_ROBUST,
+ [NBL_NETIF_F_TSO6_BIT] = NETIF_F_TSO6,
+ [NBL_NETIF_F_GSO_GRE_BIT] = NETIF_F_GSO_GRE,
+ [NBL_NETIF_F_GSO_GRE_CSUM_BIT] = NETIF_F_GSO_GRE_CSUM,
+ [NBL_NETIF_F_GSO_UDP_TUNNEL_BIT] = NETIF_F_GSO_UDP_TUNNEL,
+ [NBL_NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT] = NETIF_F_GSO_UDP_TUNNEL_CSUM,
+ [NBL_NETIF_F_GSO_PARTIAL_BIT] = NETIF_F_GSO_PARTIAL,
+ [NBL_NETIF_F_GSO_UDP_L4_BIT] = NETIF_F_GSO_UDP_L4,
+ [NBL_NETIF_F_SCTP_CRC_BIT] = NETIF_F_SCTP_CRC,
+ [NBL_NETIF_F_NTUPLE_BIT] = NETIF_F_NTUPLE,
+ [NBL_NETIF_F_RXHASH_BIT] = NETIF_F_RXHASH,
+ [NBL_NETIF_F_RXCSUM_BIT] = NETIF_F_RXCSUM,
+ [NBL_NETIF_F_HW_VLAN_STAG_TX_BIT] = NETIF_F_HW_VLAN_STAG_TX,
+ [NBL_NETIF_F_HW_VLAN_STAG_RX_BIT] = NETIF_F_HW_VLAN_STAG_RX,
+ [NBL_NETIF_F_HW_VLAN_STAG_FILTER_BIT] = NETIF_F_HW_VLAN_STAG_FILTER,
+ [NBL_NETIF_F_HW_TC_BIT] = NETIF_F_HW_TC,
+};
+
#define NBL_FEATURE(name) (1 << (NBL_##name##_BIT))
+#define NBL_FEATURE_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
+
+static inline netdev_features_t nbl_features_to_netdev_features(u64 features)
+{
+ netdev_features_t netdev_features = 0;
+ int i = 0;
+
+ for (i = 0; i < NBL_FEATURES_COUNT; i++) {
+ if (NBL_FEATURE_TEST_BIT(features, i))
+ netdev_features |= nbl_netdev_features[i];
+ }
+
+ return netdev_features;
+}
enum nbl_performance_mode {
NBL_QUIRKS_NO_TOE,
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
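The feature-translation helper in the hunk above walks a u64 device-feature bitmap and ORs in the corresponding NETIF_F_* flag for each set bit. A minimal standalone sketch of the same pattern, using generic names rather than the driver's types:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the driver's feature bits and the
 * kernel's NETIF_F_* flag values (not the real definitions). */
enum { FEAT_SG_BIT, FEAT_CSUM_BIT, FEAT_TSO_BIT, FEAT_COUNT };

static const uint64_t netdev_flags[FEAT_COUNT] = {
	[FEAT_SG_BIT]   = 1ull << 0,
	[FEAT_CSUM_BIT] = 1ull << 3,
	[FEAT_TSO_BIT]  = 1ull << 16,
};

#define FEAT_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)

/* Same walk as nbl_features_to_netdev_features(): test each device
 * bit and OR in the mapped flag. Using |= rather than += keeps the
 * result correct even if a table entry were ever duplicated. */
static uint64_t to_netdev_features(uint64_t features)
{
	uint64_t out = 0;
	int i;

	for (i = 0; i < FEAT_COUNT; i++)
		if (FEAT_TEST_BIT(features, i))
			out |= netdev_flags[i];
	return out;
}
```

The designated-initializer table mirrors the `nbl_netdev_features[]` array in the patch, keeping the bit-to-flag mapping in one place.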
* Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
@ 2026-01-23 1:31 ` Randy Dunlap
2026-01-23 3:27 ` Andrew Lunn
2026-01-27 18:10 ` [PATCH " Simon Horman
2 siblings, 0 replies; 31+ messages in thread
From: Randy Dunlap @ 2026-01-23 1:31 UTC (permalink / raw)
To: illusion.wang, dimon.zhao, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
Hi--
On 1/22/26 5:17 PM, illusion.wang wrote:
> diff --git a/drivers/net/ethernet/nebula-matrix/Kconfig b/drivers/net/ethernet/nebula-matrix/Kconfig
> new file mode 100644
> index 000000000000..14c27c63d7ec
> --- /dev/null
> +++ b/drivers/net/ethernet/nebula-matrix/Kconfig
> @@ -0,0 +1,37 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Nebula-matrix network device configuration
> +#
> +
> +config NET_VENDOR_NEBULA_MATRIX
> + bool "Nebula-matrix devices"
> + default y
> + help
> + If you have a network (Ethernet) card belonging to this class, say Y.
> +
> + Note that the answer to this question doesn't directly affect the
> + kernel: saying N will just cause the configurator to skip all
> + the questions about Nebula-matrix cards. If you say Y, you will be
> + asked for your specific card in the following questions.
> +
> +if NET_VENDOR_NEBULA_MATRIX
> +
> +config NBL_CORE
> + tristate "Nebula-matrix Ethernet Controller m18100 Family support"
> + depends on 64BIT && PCI
> + default m
> + select PAGE_POOL
> + help
> + This driver supports Nebula-matrix Ethernet Controller m18100 Family of
> + devices. For more information about this product, go to the product
> + description with smart NIC:
> +
> + <http://www.nebula-matrix.com>
> +
> + More specific information on configuring the driver is in
> + <file:Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst>.
> +
> + To compile this driver as a module, choose M here. The module
> + will be called nbl_core.
> +
> +endif # NET_VENDOR_NEBULA_MATRIX
There are lots of spaces used here for indentation.
Please see Documentation/process/coding-style.rst, section
10) Kconfig configuration files, for the correct indentation style
for Kconfig files.
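For reference, that section asks for one tab of indentation under a `config` entry, with help text indented by a tab plus two extra spaces — roughly like this (entry names copied from the patch; whitespace may be mangled by mail clients):

```kconfig
config NBL_CORE
	tristate "Nebula-matrix Ethernet Controller m18100 Family support"
	depends on 64BIT && PCI
	default m
	select PAGE_POOL
	help
	  This driver supports Nebula-matrix Ethernet Controller m18100
	  Family of devices.
```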
--
~Randy
* [PATCH v3 net-next 12/15] net/nebula-matrix: add common/ctrl dev init/reinit operation
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (13 preceding siblings ...)
2026-01-23 1:17 ` [PATCH v3 net-next 15/15] net/nebula-matrix: add net " illusion.wang
@ 2026-01-23 3:15 ` illusion.wang
2026-01-27 18:06 ` [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs Simon Horman
15 siblings, 0 replies; 31+ messages in thread
From: illusion.wang @ 2026-01-23 3:15 UTC (permalink / raw)
To: netdev; +Cc: illusion.wang, open list
Common Device Setup: nbl_dev_setup_common_dev configures mailbox queues,
registers cleanup tasks, and requests IRQs.
Control Device Setup (optional): nbl_dev_setup_ctrl_dev initializes
the chip and configures all channel queues.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 194 +++++++++++++++++-
.../nebula-matrix/nbl/nbl_core/nbl_dev.h | 92 +++++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 69 +++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.h | 91 +++++++-
.../nbl/nbl_include/nbl_def_service.h | 57 +++++
.../nbl/nbl_include/nbl_include.h | 7 +
6 files changed, 506 insertions(+), 4 deletions(-)
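The init path added below follows the usual kernel goto-unwind pattern: each failure label unwinds only the stages that completed, in reverse order. A minimal standalone sketch of that shape (stub functions and the -14/-EFAULT value are placeholders, not the driver's code):

```c
#include <assert.h>

/* Stubs standing in for nbl_dev_setup_common_dev() and
 * nbl_dev_setup_ctrl_dev(); state flags let us observe the unwind. */
static int common_up;
static int ctrl_fails;

static int setup_common_dev(void)   { common_up = 1; return 0; }
static void remove_common_dev(void) { common_up = 0; }
static int setup_ctrl_dev(void)     { return ctrl_fails ? -14 : 0; }

/* Same goto-unwind shape as nbl_dev_init(): a failure in the ctrl
 * stage tears down the already-initialized common stage before
 * propagating the error. */
static int dev_init(int is_ctrl)
{
	int ret;

	ret = setup_common_dev();
	if (ret)
		goto setup_common_fail;

	if (is_ctrl) {
		ret = setup_ctrl_dev();
		if (ret)
			goto setup_ctrl_fail;
	}

	return 0;

setup_ctrl_fail:
	remove_common_dev();
setup_common_fail:
	return ret;
}
```

Labels are placed so that falling through from a later label unwinds every earlier stage exactly once, which is why `nbl_dev_remove()` performs the same calls in the same reverse order.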
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
index dbeffcfe397e..b661d8ab972c 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -11,6 +11,179 @@
#include "nbl_dev.h"
static struct nbl_dev_ops dev_ops;
+static void nbl_dev_register_common_irq(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+ struct nbl_msix_info *msix_info =
+ NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+
+ msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num = 1;
+}
+
+/* ---------- Channel config ---------- */
+static int nbl_dev_setup_chan_qinfo(struct nbl_dev_mgt *dev_mgt, u8 chan_type)
+{
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt);
+ void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt);
+ int ret = 0;
+
+ if (!chan_ops->check_queue_exist(priv, chan_type))
+ return 0;
+
+ ret = chan_ops->cfg_chan_qinfo_map_table(priv, chan_type);
+ if (ret)
+ dev_err(dev, "setup chan:%d, qinfo map table failed\n",
+ chan_type);
+
+ return ret;
+}
+
+static int nbl_dev_setup_chan_queue(struct nbl_dev_mgt *dev_mgt, u8 chan_type)
+{
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt);
+ int ret = 0;
+
+ if (chan_ops->check_queue_exist(priv, chan_type))
+ ret = chan_ops->setup_queue(priv, chan_type);
+
+ return ret;
+}
+
+static int nbl_dev_remove_chan_queue(struct nbl_dev_mgt *dev_mgt, u8 chan_type)
+{
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt);
+ int ret = 0;
+
+ if (chan_ops->check_queue_exist(priv, chan_type))
+ ret = chan_ops->teardown_queue(priv, chan_type);
+
+ return ret;
+}
+
+static void nbl_dev_register_chan_task(struct nbl_dev_mgt *dev_mgt,
+ u8 chan_type, struct work_struct *task)
+{
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+
+ if (chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+ chan_type))
+ chan_ops->register_chan_task(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+ chan_type, task);
+}
+
+/* ---------- Tasks config ---------- */
+static void nbl_dev_clean_mailbox_task(struct work_struct *work)
+{
+ struct nbl_dev_common *common_dev =
+ container_of(work, struct nbl_dev_common, clean_mbx_task);
+ struct nbl_dev_mgt *dev_mgt = common_dev->dev_mgt;
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+
+ chan_ops->clean_queue_subtask(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+ NBL_CHAN_TYPE_MAILBOX);
+}
+
+/* ---------- Dev init process ---------- */
+static int nbl_dev_setup_common_dev(struct nbl_adapter *adapter,
+ struct nbl_init_param *param)
+{
+ struct nbl_dev_mgt *dev_mgt =
+ (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_common *common_dev;
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+
+ common_dev = devm_kzalloc(NBL_ADAP_TO_DEV(adapter),
+ sizeof(struct nbl_dev_common), GFP_KERNEL);
+ if (!common_dev)
+ return -ENOMEM;
+ common_dev->dev_mgt = dev_mgt;
+
+ if (nbl_dev_setup_chan_queue(dev_mgt, NBL_CHAN_TYPE_MAILBOX))
+ goto setup_chan_fail;
+
+ nbl_common_alloc_task(&common_dev->clean_mbx_task,
+ nbl_dev_clean_mailbox_task);
+
+ common->vsi_id = serv_ops->get_vsi_id(priv, 0, NBL_VSI_DATA);
+
+ serv_ops->get_eth_id(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+ NBL_COMMON_TO_VSI_ID(common),
+ &NBL_COMMON_TO_ETH_MODE(common),
+ &NBL_COMMON_TO_ETH_ID(common),
+ &NBL_COMMON_TO_LOGIC_ETH_ID(common));
+
+ nbl_dev_register_chan_task(dev_mgt, NBL_CHAN_TYPE_MAILBOX,
+ &common_dev->clean_mbx_task);
+
+ dev_mgt->common_dev = common_dev;
+ nbl_dev_register_common_irq(dev_mgt);
+ return 0;
+
+setup_chan_fail:
+ devm_kfree(NBL_ADAP_TO_DEV(adapter), common_dev);
+ return -EFAULT;
+}
+
+static void nbl_dev_remove_common_dev(struct nbl_adapter *adapter)
+{
+ struct nbl_dev_mgt *dev_mgt =
+ (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_dev_common *common_dev = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+
+ if (!common_dev)
+ return;
+
+ nbl_dev_register_chan_task(dev_mgt, NBL_CHAN_TYPE_MAILBOX, NULL);
+ nbl_common_release_task(&common_dev->clean_mbx_task);
+ nbl_dev_remove_chan_queue(dev_mgt, NBL_CHAN_TYPE_MAILBOX);
+ devm_kfree(NBL_ADAP_TO_DEV(adapter), common_dev);
+ dev_mgt->common_dev = NULL;
+}
+
+static int nbl_dev_setup_ctrl_dev(struct nbl_adapter *adapter,
+ struct nbl_init_param *param)
+{
+ struct nbl_dev_mgt *dev_mgt =
+ (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+ struct device *dev = NBL_ADAP_TO_DEV(adapter);
+ int i, ret = 0;
+
+ ret = serv_ops->init_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+ if (ret) {
+ dev_err(dev, "ctrl dev chip_init failed\n");
+ goto chip_init_fail;
+ }
+
+ for (i = 0; i < NBL_CHAN_TYPE_MAX; i++) {
+ ret = nbl_dev_setup_chan_qinfo(dev_mgt, i);
+ if (ret) {
+ dev_err(dev, "ctrl dev setup chan qinfo failed\n");
+ goto setup_chan_q_fail;
+ }
+ }
+
+ return 0;
+
+setup_chan_q_fail:
+ serv_ops->destroy_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+chip_init_fail:
+ return ret;
+}
+
+static void nbl_dev_remove_ctrl_dev(struct nbl_adapter *adapter)
+{
+ struct nbl_dev_mgt *dev_mgt =
+ (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+ struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+ serv_ops->destroy_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+}
static int nbl_dev_setup_dev_mgt(struct nbl_common_info *common,
struct nbl_dev_mgt **dev_mgt)
@@ -74,14 +247,27 @@ int nbl_dev_init(void *p, struct nbl_init_param *param)
(*dev_mgt)->serv_ops_tbl = serv_ops_tbl;
(*dev_mgt)->chan_ops_tbl = chan_ops_tbl;
+
+ ret = nbl_dev_setup_common_dev(adapter, param);
+ if (ret)
+ goto setup_common_dev_fail;
+
+ if (common->is_ctrl) {
+ ret = nbl_dev_setup_ctrl_dev(adapter, param);
+ if (ret)
+ goto setup_ctrl_dev_fail;
+ }
ret = nbl_dev_setup_ops(dev, dev_ops_tbl, adapter);
if (ret)
goto setup_ops_fail;
-
return 0;
setup_ops_fail:
-
+ if (common->is_ctrl)
+ nbl_dev_remove_ctrl_dev(adapter);
+setup_ctrl_dev_fail:
+ nbl_dev_remove_common_dev(adapter);
+setup_common_dev_fail:
nbl_dev_remove_dev_mgt(common, dev_mgt);
setup_mgt_fail:
return ret;
@@ -98,7 +284,9 @@ void nbl_dev_remove(void *p)
&NBL_ADAP_TO_DEV_OPS_TBL(adapter);
nbl_dev_remove_ops(dev, dev_ops_tbl);
-
+ if (common->is_ctrl)
+ nbl_dev_remove_ctrl_dev(adapter);
+ nbl_dev_remove_common_dev(adapter);
nbl_dev_remove_dev_mgt(common, dev_mgt);
}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
index 8a7c85db8ceb..ebcf33eead7d 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
@@ -29,11 +29,103 @@
(NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->ops)
#define NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt) \
(NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->priv)
+
+#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | \
+ NETIF_MSG_LINK | NETIF_MSG_IFDOWN | \
+ NETIF_MSG_IFUP)
+
+#define NBL_STRING_NAME_LEN 32
+#define NBL_DEFAULT_MTU 1500
+
+enum nbl_msix_serv_type {
+ /* virtio_dev has a config vector_id, and the vector_id need is 0 */
+ NBL_MSIX_VIRTIO_TYPE = 0,
+ NBL_MSIX_NET_TYPE,
+ NBL_MSIX_MAILBOX_TYPE,
+ NBL_MSIX_TYPE_MAX
+};
+
+struct nbl_msix_serv_info {
+ char irq_name[NBL_STRING_NAME_LEN];
+ u16 num;
+ u16 base_vector_id;
+ /* true: hw report msix, hw need to mask actively */
+ bool hw_self_mask_en;
+};
+
+struct nbl_msix_info {
+ struct nbl_msix_serv_info serv_info[NBL_MSIX_TYPE_MAX];
+ struct msix_entry *msix_entries;
+};
+
+struct nbl_dev_common {
+ struct nbl_dev_mgt *dev_mgt;
+ struct nbl_msix_info msix_info;
+ char mailbox_name[NBL_STRING_NAME_LEN];
+ // for ctrl-dev/net-dev mailbox recv msg
+ struct work_struct clean_mbx_task;
+
+};
+
+struct nbl_dev_vsi_controller {
+ u16 queue_num;
+ u16 queue_free_offset;
+ void *vsi_list[NBL_VSI_MAX];
+};
+
+struct nbl_dev_net_ops {
+ int (*setup_netdev_ops)(void *priv, struct net_device *netdev,
+ struct nbl_init_param *param);
+};
+
+struct nbl_dev_net {
+ struct net_device *netdev;
+ struct nbl_dev_net_ops *ops;
+ u8 eth_id;
+ struct nbl_dev_vsi_controller vsi_ctrl;
+ u16 kernel_queue_num;
+};
+
struct nbl_dev_mgt {
struct nbl_common_info *common;
struct nbl_service_ops_tbl *serv_ops_tbl;
struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_dev_common *common_dev;
+ struct nbl_dev_net *net_dev;
+};
+
+struct nbl_dev_vsi_ops {
+ int (*register_vsi)(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param, void *vsi_data);
+ int (*setup)(struct nbl_dev_mgt *dev_mgt, struct nbl_init_param *param,
+ void *vsi_data);
+ void (*remove)(struct nbl_dev_mgt *dev_mgt, void *vsi_data);
+ int (*start)(void *dev_priv, struct net_device *netdev, void *vsi_data);
+ void (*stop)(void *dev_priv, void *vsi_data);
+ int (*netdev_build)(struct nbl_dev_mgt *dev_mgt,
+ struct nbl_init_param *param,
+ struct net_device *netdev, void *vsi_data);
+ void (*netdev_destroy)(struct nbl_dev_mgt *dev_mgt, void *vsi_data);
+};
+
+struct nbl_dev_vsi {
+ struct nbl_dev_vsi_ops *ops;
+ struct net_device *netdev;
+ struct net_device *napi_netdev;
+ struct nbl_register_net_result register_result;
+ u16 vsi_id;
+ u16 queue_offset;
+ u16 queue_num;
+ u16 queue_size;
+ u8 index;
+ bool enable;
+
+};
+struct nbl_dev_vsi_tbl {
+ struct nbl_dev_vsi_ops vsi_ops;
};
+struct nbl_dev_vsi *nbl_dev_vsi_select(struct nbl_dev_mgt *dev_mgt,
+ u8 vsi_index);
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
index a0adfb95382c..1c2f8d7e4272 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
@@ -12,7 +12,72 @@
#include <linux/if_vlan.h>
#include "nbl_service.h"
+
+static int nbl_serv_init_chip(void *priv)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops;
+ struct nbl_common_info *common;
+ struct device *dev;
+ int ret = 0;
+
+ common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+ dev = NBL_COMMON_TO_DEV(common);
+
+ ret = disp_ops->init_chip_module(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+ if (ret) {
+ dev_err(dev, "init_chip_module failed\n");
+ goto module_init_fail;
+ }
+
+ ret = disp_ops->queue_init(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+ if (ret) {
+ dev_err(dev, "queue_init failed\n");
+ goto queue_init_fail;
+ }
+ return 0;
+queue_init_fail:
+module_init_fail:
+ return ret;
+}
+
+static int nbl_serv_destroy_chip(void *p)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)p;
+ struct nbl_dispatch_ops *disp_ops;
+
+ disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ disp_ops->deinit_chip_module(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+ return 0;
+}
+
+static u16 nbl_serv_get_vsi_id(void *priv, u16 func_id, u16 type)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->get_vsi_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+ func_id, type);
+}
+
+static void nbl_serv_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode,
+ u8 *eth_id, u8 *logic_eth_id)
+{
+ struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+ struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+ return disp_ops->get_eth_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id,
+ eth_mode, eth_id, logic_eth_id);
+}
+
static struct nbl_service_ops serv_ops = {
+ .init_chip = nbl_serv_init_chip,
+ .destroy_chip = nbl_serv_destroy_chip,
+
+ .get_vsi_id = nbl_serv_get_vsi_id,
+ .get_eth_id = nbl_serv_get_eth_id,
};
/* Structure starts here, adding an op should not modify anything below */
@@ -71,6 +136,7 @@ int nbl_serv_init(void *p, struct nbl_init_param *param)
struct nbl_service_mgt **serv_mgt;
struct nbl_service_ops_tbl **serv_ops_tbl;
struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ struct nbl_dispatch_ops *disp_ops;
struct nbl_channel_ops_tbl *chan_ops_tbl;
int ret = 0;
@@ -80,6 +146,7 @@ int nbl_serv_init(void *p, struct nbl_init_param *param)
serv_ops_tbl = &NBL_ADAP_TO_SERV_OPS_TBL(adapter);
disp_ops_tbl = NBL_ADAP_TO_DISP_OPS_TBL(adapter);
chan_ops_tbl = NBL_ADAP_TO_CHAN_OPS_TBL(adapter);
+ disp_ops = disp_ops_tbl->ops;
ret = nbl_serv_setup_serv_mgt(common, serv_mgt);
if (ret)
@@ -91,6 +158,8 @@ int nbl_serv_init(void *p, struct nbl_init_param *param)
(*serv_mgt)->disp_ops_tbl = disp_ops_tbl;
(*serv_mgt)->chan_ops_tbl = chan_ops_tbl;
+ disp_ops->get_resource_pt_ops(disp_ops_tbl->priv,
+ &(*serv_ops_tbl)->pt_ops);
return 0;
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
index 083350bd50ba..273218200526 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
@@ -12,12 +12,90 @@
#include "nbl_core.h"
#define NBL_SERV_MGT_TO_COMMON(serv_mgt) ((serv_mgt)->common)
+#define NBL_SERV_MGT_TO_DEV(serv_mgt) \
+ NBL_COMMON_TO_DEV(NBL_SERV_MGT_TO_COMMON(serv_mgt))
+#define NBL_NET_RES_MGT_TO_NETDEV(net_res_mgt) ((net_res_mgt)->netdev)
+#define NBL_SERV_MGT_TO_NETDEV(serv_mgt) \
+ NBL_NET_RES_MGT_TO_NETDEV(NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt))
+#define NBL_SERV_MGT_TO_RING_MGT(serv_mgt) (&(serv_mgt)->ring_mgt)
+#define NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) ((serv_mgt)->net_resource_mgt)
+
+#define NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt) ((serv_mgt)->disp_ops_tbl)
+#define NBL_SERV_MGT_TO_DISP_OPS(serv_mgt) \
+ (NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt)->ops)
+#define NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt) \
+ (NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt)->priv)
+
#define NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt) ((serv_mgt)->chan_ops_tbl)
#define NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt) \
(NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt)->ops)
#define NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt) \
(NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt)->priv)
+#define NBL_DEFAULT_VLAN_ID 0
+#define NBL_HW_STATS_PERIOD_SECONDS 5
+#define NBL_HW_STATS_RX_RATE_THRESHOLD (1000) /* 1k pps */
+
+#define NBL_TX_TSO_MSS_MIN (256)
+#define NBL_TX_TSO_MSS_MAX (16383)
+#define NBL_TX_TSO_L2L3L4_HDR_LEN_MIN (42)
+#define NBL_TX_TSO_L2L3L4_HDR_LEN_MAX (128)
+#define NBL_TX_CHECKSUM_OFFLOAD_L2L3L4_HDR_LEN_MAX (255)
+
+#define SET_DPORT_TYPE_VSI_HOST (0)
+#define SET_DPORT_TYPE_VSI_ECPU (1)
+#define SET_DPORT_TYPE_ETH_LAG (2)
+#define SET_DPORT_TYPE_SP_PORT (3)
+
+/* primary vlan in vlan list */
+#define NBL_NO_TRUST_MAX_VLAN 9
+/* primary mac not in submac list */
+#define NBL_NO_TRUST_MAX_MAC 12
+
+struct nbl_serv_ring {
+ dma_addr_t dma;
+ u16 index;
+ u16 local_queue_id;
+ u16 global_queue_id;
+ bool need_recovery;
+ u32 tx_timeout_count;
+};
+
+struct nbl_serv_vector {
+ char name[32];
+ struct net_device *netdev;
+ struct nbl_napi_struct *nbl_napi;
+ struct nbl_serv_ring *tx_ring;
+ struct nbl_serv_ring *rx_ring;
+ u8 __iomem *irq_enable_base;
+ u32 irq_data;
+ u16 local_vec_id;
+ u16 global_vec_id;
+};
+
+struct nbl_serv_ring_vsi_info {
+ u16 vsi_index;
+ u16 vsi_id;
+ u16 ring_offset;
+ u16 ring_num;
+ u16 active_ring_num;
+ bool itr_dynamic;
+ bool started;
+};
+
+struct nbl_serv_ring_mgt {
+ struct nbl_serv_ring *tx_rings;
+ struct nbl_serv_ring *rx_rings;
+ struct nbl_serv_vector *vectors;
+ struct nbl_serv_ring_vsi_info vsi_info[NBL_VSI_MAX];
+ u16 tx_desc_num;
+ u16 rx_desc_num;
+ u16 tx_ring_num;
+ u16 rx_ring_num;
+ u16 active_ring_num;
+ bool net_msix_mask_en;
+};
+
enum {
NBL_PROMISC = 0,
NBL_ALLMULTI = 1,
@@ -27,11 +105,22 @@ struct nbl_serv_netdev_ops {
void *pf_netdev_ops;
};
+struct nbl_serv_net_resource_mgt {
+ struct nbl_service_mgt *serv_mgt;
+ struct net_device *netdev;
+ struct nbl_serv_netdev_ops netdev_ops;
+ u16 num_net_msix;
+ u16 vlan_tci;
+ u16 vlan_proto;
+ int max_tx_rate;
+};
+
struct nbl_service_mgt {
struct nbl_common_info *common;
struct nbl_dispatch_ops_tbl *disp_ops_tbl;
struct nbl_channel_ops_tbl *chan_ops_tbl;
-
+ struct nbl_serv_ring_mgt ring_mgt;
+ struct nbl_serv_net_resource_mgt *net_resource_mgt;
};
#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
index dc261fda3aa5..29b541649a90 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
@@ -10,6 +10,63 @@
#include "nbl_include.h"
struct nbl_service_ops {
+ int (*init_chip)(void *p);
+ int (*destroy_chip)(void *p);
+ int (*configure_msix_map)(void *p, u16 num_net_msix,
+ u16 num_others_msix, bool net_msix_mask_en);
+ int (*destroy_msix_map)(void *priv);
+ int (*enable_mailbox_irq)(void *p, u16 vector_id, bool enable_msix);
+ int (*request_net_irq)(void *priv,
+ struct nbl_msix_info_param *msix_info);
+ void (*free_net_irq)(void *priv, struct nbl_msix_info_param *msix_info);
+ u16 (*get_global_vector)(void *priv, u16 local_vec_id);
+ u16 (*get_msix_entry_id)(void *priv, u16 local_vec_id);
+
+ int (*vsi_open)(void *priv, struct net_device *netdev, u16 vsi_index,
+ u16 real_qps, bool use_napi);
+ int (*vsi_stop)(void *priv, u16 vsi_index);
+
+ int (*netdev_open)(struct net_device *netdev);
+ int (*netdev_stop)(struct net_device *netdev);
+ void (*get_stats64)(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats);
+ netdev_features_t (*features_check)(struct sk_buff *skb,
+ struct net_device *dev,
+ netdev_features_t features);
+ u16 (*select_queue)(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev);
+ int (*register_net)(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result);
+ int (*unregister_net)(void *priv);
+ int (*setup_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num,
+ u16 net_vector_id);
+ void (*remove_txrx_queues)(void *priv, u16 vsi_id);
+ int (*register_vsi_info)(void *priv, struct nbl_vsi_param *vsi_param);
+ int (*setup_q2vsi)(void *priv, u16 vsi_id);
+ void (*remove_q2vsi)(void *priv, u16 vsi_id);
+ int (*setup_rss)(void *priv, u16 vsi_id);
+ void (*remove_rss)(void *priv, u16 vsi_id);
+ int (*setup_rss_indir)(void *priv, u16 vsi_id);
+ int (*alloc_rings)(void *priv, struct net_device *dev,
+ struct nbl_ring_param *param);
+ void (*cpu_affinity_init)(void *priv, u16 rings_num);
+ void (*free_rings)(void *priv);
+ int (*enable_napis)(void *priv, u16 vsi_index);
+ void (*disable_napis)(void *priv, u16 vsi_index);
+ void (*set_mask_en)(void *priv, bool enable);
+ int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ u32 (*get_tx_headroom)(void *priv);
+ u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
+ void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
+ u8 *logic_eth_id);
+ int (*setup_net_resource_mgt)(void *priv, struct net_device *dev,
+ u16 vlan_proto, u16 vlan_tci, u32 rate);
+ void (*remove_net_resource_mgt)(void *priv);
+ void (*set_netdev_ops)(void *priv,
+ const struct net_device_ops *net_device_ops,
+ bool is_pf);
+
};
struct nbl_service_ops_tbl {
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 6d57391f85f9..0ab4dd981c8a 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -262,6 +262,13 @@ enum nbl_performance_mode {
NBL_QUIRKS_UVN_PREFETCH_ALIGN,
};
+struct nbl_vsi_param {
+ u16 vsi_id;
+ u16 queue_offset;
+ u16 queue_num;
+ u8 index;
+};
+
struct nbl_ring_param {
u16 tx_ring_num;
u16 rx_ring_num;
--
2.47.3
^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
2026-01-23 1:31 ` Randy Dunlap
@ 2026-01-23 3:27 ` Andrew Lunn
2026-01-26 3:23 ` Re: [PATCH " Illusion Wang
[not found] ` <ad0a6f0f-3ce3-4f38-b27d-2f121511d6e3.illusion.wang@nebula-matrix.com>
2026-01-27 18:10 ` [PATCH " Simon Horman
2 siblings, 2 replies; 31+ messages in thread
From: Andrew Lunn @ 2026-01-23 3:27 UTC (permalink / raw)
To: illusion.wang, y
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko,
lukas.bulwahn, hawk, ast, bpf, sdf, daniel, john.fastabend,
edumazet, open list
> +config NBL_CORE
> + tristate "Nebula-matrix Ethernet Controller m18100 Family support"
> + depends on 64BIT && PCI
> + default m
> + select PAGE_POOL
> + help
> + This driver supports Nebula-matrix Ethernet Controller m18100 Family of
> + devices. For more information about this product, go to the product
> + description with smart NIC:
_CORE is a bit unusual for a driver. CORE is generally a library which
is shared by a number of drivers. It does not have a tristate string,
it is just selected by the drivers which need it.
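The pattern Andrew describes — a hidden core library symbol with no prompt, selected by the user-visible driver symbols that need it — could look roughly like this hypothetical Kconfig fragment (symbol names are illustrative, not from this submission):

```
# Hidden library symbol: no prompt string, so it never shows up in
# menuconfig and is only enabled via "select".
config NBL_CORE
	tristate

# User-visible driver option; it pulls in the core it depends on.
config NBL_M18100
	tristate "Nebula-matrix m18100 Ethernet support"
	depends on 64BIT && PCI
	select NBL_CORE
	select PAGE_POOL
```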
> +static int nbl_probe(struct pci_dev *pdev,
> + const struct pci_device_id __always_unused *id)
> +{
> + struct device *dev = &pdev->dev;
> +
> + dev_dbg(dev, "nbl probe ok!\n");
Please don't have pointless debug messages. If your driver does not
even probe, you should not be submitting it.
> + return 0;
> +}
> +
> +static void nbl_remove(struct pci_dev *pdev)
> +{
> + dev_dbg(&pdev->dev, "nbl remove OK!\n");
> +}
> +
> +#define NBL_VENDOR_ID (0x1F0F)
Andrew
---
pw-bot: cr
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 02/15] net/nebula-matrix: add our driver architecture
2026-01-23 1:17 ` [PATCH v3 net-next 02/15] net/nebula-matrix: add our driver architecture illusion.wang
@ 2026-01-23 3:57 ` Andrew Lunn
0 siblings, 0 replies; 31+ messages in thread
From: Andrew Lunn @ 2026-01-23 3:57 UTC (permalink / raw)
To: illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko,
lukas.bulwahn, hawk, ast, bpf, sdf, daniel, john.fastabend,
edumazet, open list
> + NBL_CHAN_MGT_TO_COMMON(&(*chan_mgt_leonis)->chan_mgt) = common;
Macros on the left hand side is pretty unusual. Why is it needed?
The cast suggests your types are messed up. You should not need casts.
> +#define nbl_err(common, fmt, ...) \
> +do { \
> + typeof(common) _common = (common); \
> + dev_err(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
> +} while (0)
You should probably remove these and just use dev_err() or
netdev_err() directly.
> +#define NBL_COMMON_TO_PDEV(common) ((common)->pdev)
> +#define NBL_COMMON_TO_DEV(common) ((common)->dev)
> +#define NBL_COMMON_TO_DMA_DEV(common) ((common)->dma_dev)
> +#define NBL_COMMON_TO_VSI_ID(common) ((common)->vsi_id)
> +#define NBL_COMMON_TO_ETH_ID(common) ((common)->eth_id)
> +#define NBL_COMMON_TO_ETH_MODE(common) ((common)->eth_mode)
> +#define NBL_COMMON_TO_DEBUG_LVL(common) ((common)->debug_lvl)
> +
> +#define NBL_COMMON_TO_OCP_CAP(common) ((common)->is_ocp)
> +#define NBL_COMMON_TO_PCI_USING_DAC(common) ((common)->pci_using_dac)
> +#define NBL_COMMON_TO_MGT_PF(common) ((common)->mgt_pf)
> +#define NBL_COMMON_TO_PCI_FUNC_ID(common) ((common)->function)
> +#define NBL_COMMON_TO_LOGIC_ETH_ID(common) ((common)->logic_eth_id)
These are all just obfuscation. Please remove them. And not just
these. All of them. Such macros don't help.
> +int nbl_dev_init(void *p, struct nbl_init_param *param);
> +void nbl_dev_remove(void *p);
> +int nbl_dev_start(void *p, struct nbl_init_param *param);
> +void nbl_dev_stop(void *p);
Using a void * as a parameter for something like nbl_dev_start() and
nbl_dev_stop() is a red flag. You should know they type you are
passing to functions like this. In general, you want the compiler to
be doing type checking for you, so you need real types.
> +struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
> + struct nbl_init_param *param)
> +{
> + struct nbl_adapter *adapter;
> + struct nbl_common_info *common;
> + struct nbl_product_base_ops *product_base_ops;
> + int ret = 0;
Reverse Christmas tree. That applies to all functions.
> +
> + if (!pdev)
> + return NULL;
Can that happen? Don't have defensive code. If there is a real reason
this could be NULL then fine, but you might want to add a comment why
it can happen. But if this is not actually expected let pdev be
dereferenced, get the Oops, so you can debug your driver. And I don't

mean this one instance. I mean all similar tests in the driver. No
defensive code, it just hides bugs.
> +void nbl_core_remove(struct nbl_adapter *adapter)
> +{
> + struct nbl_product_base_ops *product_base_ops;
> + struct device *dev;
> +
> + dev = NBL_ADAP_TO_DEV(adapter);
> + product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter);
> + nbl_dev_remove(adapter);
> + nbl_serv_remove(adapter);
> + nbl_disp_remove(adapter);
> + product_base_ops->res_remove(adapter);
> + product_base_ops->chan_remove(adapter);
> + product_base_ops->hw_remove(adapter);
> + devm_kfree(dev, adapter);
Calling devm_kfree() is unusual. Why do you do this? It suggests you
don't understand what devm_ actually does, which is pretty scary if
you have such a basic thing wrong for a driver this size.
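For context on the devm_ point: memory from devm_kzalloc() is released automatically by the driver core when the device is unbound, so an explicit devm_kfree() in a teardown path is normally redundant. A kernel-only sketch (not compilable standalone; function names are illustrative, not the driver's):

```c
/* Allocation tied to the device lifetime: no matching free needed. */
static int example_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	struct nbl_adapter *adapter;

	adapter = devm_kzalloc(&pdev->dev, sizeof(*adapter), GFP_KERNEL);
	if (!adapter)
		return -ENOMEM;

	pci_set_drvdata(pdev, adapter);
	return 0;
}

static void example_remove(struct pci_dev *pdev)
{
	/*
	 * No devm_kfree() here: the adapter memory is freed by the
	 * driver core after remove() returns, when the device is
	 * detached from the driver.
	 */
}
```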
> static int nbl_probe(struct pci_dev *pdev,
> const struct pci_device_id __always_unused *id)
> {
> struct device *dev = &pdev->dev;
> + struct nbl_adapter *adapter = NULL;
> + struct nbl_init_param param = {{0}};
> + int err;
>
> + if (pci_enable_device(pdev)) {
> + dev_err(&pdev->dev, "Failed to enable PCI device\n");
> + return -ENODEV;
> + }
> +
> + param.pci_using_dac = true;
> + nbl_get_func_param(pdev, id->driver_data, ¶m);
> +
> + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
> + if (err) {
> + dev_err(dev, "Configure DMA 64 bit mask failed, err = %d\n",
> + err);
dev_err() is usually for a fatal error. Here you just keep going, so
it is not really an error. dev_dbg()?
Andrew
---
pw-bot: cr
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-23 3:27 ` Andrew Lunn
@ 2026-01-26 3:23 ` Illusion Wang
[not found] ` <ad0a6f0f-3ce3-4f38-b27d-2f121511d6e3.illusion.wang@nebula-matrix.com>
1 sibling, 0 replies; 31+ messages in thread
From: Illusion Wang @ 2026-01-26 3:23 UTC (permalink / raw)
To: Andrew Lunn, y
Cc: Dimon, Alvin, Sam, netdev, andrew+netdev, corbet, kuba, linux-doc,
lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn, hawk, ast,
bpf, sdf, daniel, john.fastabend, edumazet, open list
>_CORE is a bit unusual for a driver. CORE is generally a library which
>is shared by a number of drivers. It does not have a tristate string,
>it is just selected by the drivers which need it.
Thank you for your feedback on the NBL_CORE configuration option. We appreciate
your guidance on adhering to Linux kernel naming conventions.
Why NBL_CORE?
The driver is designed to support three distinct functionalities for the
Nebula-matrix m18100 Ethernet controller:
1. Standard Ethernet/NIC functionality (via the main driver module).
2. Auxiliary device (auxiliary driver) for RDMA (exposed through auxiliary_device).
3. MDEV-based SR-IOV/VF emulation (for native device splitting).
Since shared logic (e.g., PCIe resource management, MMIO access, interrupt
handling, and common hardware abstractions) is required across both the main
driver and auxiliary/RDMA driver, we opted for a _CORE suffix to indicate:
Reusability: The module provides foundational code used by multiple sub-drivers
(main NIC + auxiliary RDMA).
Modularity: Future extensions (e.g., additional auxiliary devices) can reuse the
same core infrastructure.
So it serves a similar purpose to mlx5_core (Mellanox), where a central module
handles hardware-specific operations for multiple sub-drivers.
illusion.wang
------------------------------------------------------------------
From: Andrew Lunn <andrew@lunn.ch>
Sent: Friday, January 23, 2026, 11:27
To: Illusion Wang<illusion.wang@nebula-matrix.com>; y<y@lunn.ch>
Cc: Dimon<dimon.zhao@nebula-matrix.com>; Alvin<alvin.wang@nebula-matrix.com>; Sam<sam.chen@nebula-matrix.com>; netdev<netdev@vger.kernel.org>; "andrew+netdev"<andrew+netdev@lunn.ch>; corbet<corbet@lwn.net>; kuba<kuba@kernel.org>; "linux-doc"<linux-doc@vger.kernel.org>; lorenzo<lorenzo@kernel.org>; pabeni<pabeni@redhat.com>; horms<horms@kernel.org>; "vadim.fedorenko"<vadim.fedorenko@linux.dev>; "lukas.bulwahn"<lukas.bulwahn@redhat.com>; hawk<hawk@kernel.org>; ast<ast@kernel.org>; bpf<bpf@vger.kernel.org>; sdf<sdf@fomichev.me>; daniel<daniel@iogearbox.net>; "john.fastabend"<john.fastabend@gmail.com>; edumazet<edumazet@google.com>; open list<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
> +config NBL_CORE
> + tristate "Nebula-matrix Ethernet Controller m18100 Family support"
> + depends on 64BIT && PCI
> + default m
> + select PAGE_POOL
> + help
> + This driver supports Nebula-matrix Ethernet Controller m18100 Family of
> + devices. For more information about this product, go to the product
> + description with smart NIC:
_CORE is a bit unusual for a driver. CORE is generally a library which
is shared by a number of drivers. It does not have a tristate string,
it is just selected by the drivers which need it.
> +static int nbl_probe(struct pci_dev *pdev,
> + const struct pci_device_id __always_unused *id)
> +{
> + struct device *dev = &pdev->dev;
> +
> + dev_dbg(dev, "nbl probe ok!\n");
Please don't have pointless debug messages. If your driver does not
even probe, you should not be submitting it.
> + return 0;
> +}
> +
> +static void nbl_remove(struct pci_dev *pdev)
> +{
> + dev_dbg(&pdev->dev, "nbl remove OK!\n");
> +}
> +
> +#define NBL_VENDOR_ID (0x1F0F)
Andrew
---
pw-bot: cr
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (14 preceding siblings ...)
2026-01-23 3:15 ` [PATCH v3 net-next 12/15] net/nebula-matrix: add common/ctrl dev init/reinit operation illusion.wang
@ 2026-01-27 18:06 ` Simon Horman
2026-01-27 18:53 ` Jakub Kicinski
15 siblings, 1 reply; 31+ messages in thread
From: Simon Horman @ 2026-01-27 18:06 UTC (permalink / raw)
To: illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
kuba, linux-doc, lorenzo, pabeni, vadim.fedorenko, lukas.bulwahn,
hawk, ast, bpf, sdf, daniel, john.fastabend, edumazet, open list
...
> 55 files changed, 26753 insertions(+)
Hi,
I realise this is down from the 45kLoC in v1,
but 26kLoC is still too much for a patchset.
As per Jakub's advice in his review of v1 [*],
Please try to create a minimal driver in the order of ~5kLoC.
Thanks!
[*] https://lore.kernel.org/netdev/20260109162051.42f71451@kernel.org/
...
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
2026-01-23 1:31 ` Randy Dunlap
2026-01-23 3:27 ` Andrew Lunn
@ 2026-01-27 18:10 ` Simon Horman
2 siblings, 0 replies; 31+ messages in thread
From: Simon Horman @ 2026-01-27 18:10 UTC (permalink / raw)
To: illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
kuba, linux-doc, lorenzo, pabeni, vadim.fedorenko, lukas.bulwahn,
hawk, ast, bpf, sdf, daniel, john.fastabend, edumazet, open list
On Fri, Jan 23, 2026 at 09:17:38AM +0800, illusion.wang wrote:
...
> diff --git a/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst b/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
> new file mode 100644
> index 000000000000..c763042ce3ee
> --- /dev/null
> +++ b/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
> @@ -0,0 +1,47 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +============================================================
> +Linux Base Driver for Nebula-matrix M18100-NIC family
> +============================================================
> +
This file should be added to Documentation/networking/device_drivers/ethernet/index.rst
Flagged by make htmldocs
...
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [v3,net-next,04/15] net/nebula-matrix: add channel layer implementation and the corresponding hw interfaces
2026-01-23 1:17 ` [PATCH v3 net-next 04/15] net/nebula-matrix: add channel layer implementation and the corresponding hw interfaces illusion.wang
@ 2026-01-27 18:15 ` Simon Horman
0 siblings, 0 replies; 31+ messages in thread
From: Simon Horman @ 2026-01-27 18:15 UTC (permalink / raw)
To: illusion.wang
Cc: Simon Horman, dimon.zhao, john.fastabend, sam.chen, sdf,
lukas.bulwahn, kuba, vadim.fedorenko, edumazet, hawk, bpf, corbet,
pabeni, lorenzo, alvin.wang, andrew+netdev, netdev, daniel,
linux-doc, ast, open list
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net/nebula-matrix: add intr resource implementation
This patch adds MSI-X interrupt vector management for the Nebula Matrix
network driver, including dynamic allocation of interrupt vectors,
mapping table management, and interrupt enable/disable support.
> diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
> index f6fd151141d5..b8653e0039d7 100644
> --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
> +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
[ ... ]
> @@ -10,12 +10,20 @@
> #include "nbl_core.h"
> #include "nbl_hw.h"
>
> +struct nbl_resource_mgt;
> +
> #define NBL_RES_MGT_TO_COMMON(res_mgt) ((res_mgt)->common)
> #define NBL_RES_MGT_TO_COMMON_OPS(res_mgt) (&((res_mgt)->common_ops))
> #define NBL_RES_MGT_TO_DEV(res_mgt) \
> NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt))
> #define NBL_RES_MGT_TO_DMA_DEV(res_mgt) \
> NBL_COMMON_TO_DMA_DEV(NBL_RES_MGT_TO_COMMON(res_mgt))
> +#define NBL_RES_MGT_TO_INTR_MGT(res_mgt) ((res_mgt)->intr_mgt)
> +#define NBL_RES_MGT_TO_QUEUE_MGT(res_mgt) ((res_mgt)->queue_mgt)
> +#define NBL_RES_MGT_TO_TXRX_MGT(res_mgt) ((res_mgt)->txrx_mgt)
> +#define NBL_RES_MGT_TO_VSI_MGT(res_mgt) ((res_mgt)->vsi_mgt)
> +#define NBL_RES_MGT_TO_INTR_MGT(res_mgt) ((res_mgt)->intr_mgt)
^^^^^^^^^^^^^^^^^^^^^^^^
Is NBL_RES_MGT_TO_INTR_MGT intentionally defined twice here? It appears
on both lines with identical definitions, which will generate a compiler
warning for macro redefinition.
> +#define NBL_RES_MGT_TO_PROD_OPS(res_mgt) ((res_mgt)->product_ops)
[ ... ]
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
2026-01-27 18:06 ` [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs Simon Horman
@ 2026-01-27 18:53 ` Jakub Kicinski
2026-01-27 22:21 ` Stephen Hemminger
` (2 more replies)
0 siblings, 3 replies; 31+ messages in thread
From: Jakub Kicinski @ 2026-01-27 18:53 UTC (permalink / raw)
To: Simon Horman, illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
linux-doc, lorenzo, pabeni, vadim.fedorenko, lukas.bulwahn, hawk,
ast, bpf, sdf, daniel, john.fastabend, edumazet, open list
On Tue, 27 Jan 2026 18:06:49 +0000 Simon Horman wrote:
> As per Jakub's advice in his review of v1 [*],
> Please try to create a minimal driver in the order of ~5kLoC.
Perhaps other maintainers will disagree, but for ease of review
and to avoid the huge reposts perhaps we should consider merging
something like patches 1 and 3* first, as the initial series.
And then you can build up the driver from there in reasonable,
5kLoC-at-a-time series?
* patch 3 currently includes things which do not seem machine
generated / like register descriptors like structs added to
nbl_def_channel.h and nbl_include.h
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
2026-01-27 18:53 ` Jakub Kicinski
@ 2026-01-27 22:21 ` Stephen Hemminger
2026-01-28 2:46 ` Re: [PATCH " Illusion Wang
2026-01-28 9:16 ` [PATCH " Simon Horman
2 siblings, 0 replies; 31+ messages in thread
From: Stephen Hemminger @ 2026-01-27 22:21 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Simon Horman, illusion.wang, dimon.zhao, alvin.wang, sam.chen,
netdev, andrew+netdev, corbet, linux-doc, lorenzo, pabeni,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
On Tue, 27 Jan 2026 10:53:04 -0800
Jakub Kicinski <kuba@kernel.org> wrote:
> On Tue, 27 Jan 2026 18:06:49 +0000 Simon Horman wrote:
> > As per Jakub's advice in his review of v1 [*],
> > Please try to create a minimal driver in the order of ~5kLoC.
>
> Perhaps other maintainers will disagree, but for ease of review
> and to avoid the huge reposts perhaps we should consider merging
> something like patches 1 and 3* first, as the initial series.
> And then you can build up the driver from there in reasonable,
> 5kLoC-at-a-time series?
>
> * patch 3 currently includes things which do not seem machine
> generated / like register descriptors like structs added to
> nbl_def_channel.h and nbl_include.h
>
Part of the size problem is that the driver has multiple layers:
resources, channels, etc and lots of code that tries to be common
across OS's. This can lead to bloat. Very few drivers have done
that successfully.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
2026-01-27 18:53 ` Jakub Kicinski
2026-01-27 22:21 ` Stephen Hemminger
@ 2026-01-28 2:46 ` Illusion Wang
2026-01-28 9:16 ` [PATCH " Simon Horman
2 siblings, 0 replies; 31+ messages in thread
From: Illusion Wang @ 2026-01-28 2:46 UTC (permalink / raw)
To: Jakub Kicinski, Simon Horman
Cc: Dimon, Alvin, Sam, netdev, andrew+netdev, corbet, linux-doc,
lorenzo, pabeni, vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf,
sdf, daniel, john.fastabend, edumazet, open list
Thank you for your feedback.
Patch 3 primarily includes code that does not require extensive review,
such as the message codes and message body structures defined in
nbl_def_channel.h. These components are generally stable and
used for PF/VF communication, which is why they were initially
included in Patch 3. However, based on review feedback, I will
extract these manually written communication protocol definitions.
Additionally, I will make another round of comprehensive code
reduction to meet the submission requirements as much as possible.
------------------------------------------------------------------
From: Jakub Kicinski <kuba@kernel.org>
Sent: Wednesday, January 28, 2026, 02:53
To: Simon Horman<horms@kernel.org>; Illusion Wang<illusion.wang@nebula-matrix.com>
Cc: Dimon<dimon.zhao@nebula-matrix.com>; Alvin<alvin.wang@nebula-matrix.com>; Sam<sam.chen@nebula-matrix.com>; netdev<netdev@vger.kernel.org>; "andrew+netdev"<andrew+netdev@lunn.ch>; corbet<corbet@lwn.net>; "linux-doc"<linux-doc@vger.kernel.org>; lorenzo<lorenzo@kernel.org>; pabeni<pabeni@redhat.com>; "vadim.fedorenko"<vadim.fedorenko@linux.dev>; "lukas.bulwahn"<lukas.bulwahn@redhat.com>; hawk<hawk@kernel.org>; ast<ast@kernel.org>; bpf<bpf@vger.kernel.org>; sdf<sdf@fomichev.me>; daniel<daniel@iogearbox.net>; "john.fastabend"<john.fastabend@gmail.com>; edumazet<edumazet@google.com>; open list<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
On Tue, 27 Jan 2026 18:06:49 +0000 Simon Horman wrote:
> As per Jakub's advice in his review of v1 [*],
> Please try to create a minimal driver in the order of ~5kLoC.
Perhaps other maintainers will disagree, but for ease of review
and to avoid the huge reposts perhaps we should consider merging
something like patches 1 and 3* first, as the initial series.
And then you can build up the driver from there in reasonable,
5kLoC-at-a-time series?
* patch 3 currently includes things which do not seem machine
generated / like register descriptors like structs added to
nbl_def_channel.h and nbl_include.h
* Re: [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs
2026-01-27 18:53 ` Jakub Kicinski
2026-01-27 22:21 ` Stephen Hemminger
2026-01-28 2:46 ` Re: [PATCH " Illusion Wang
@ 2026-01-28 9:16 ` Simon Horman
2 siblings, 0 replies; 31+ messages in thread
From: Simon Horman @ 2026-01-28 9:16 UTC (permalink / raw)
To: Jakub Kicinski
Cc: illusion.wang, dimon.zhao, alvin.wang, sam.chen, netdev,
andrew+netdev, corbet, linux-doc, lorenzo, pabeni,
vadim.fedorenko, lukas.bulwahn, hawk, ast, bpf, sdf, daniel,
john.fastabend, edumazet, open list
On Tue, Jan 27, 2026 at 10:53:04AM -0800, Jakub Kicinski wrote:
> On Tue, 27 Jan 2026 18:06:49 +0000 Simon Horman wrote:
> > As per Jakub's advice in his review of v1 [*],
> > Please try to create a minimal driver in the order of ~5kLoC.
>
> Perhaps other maintainers will disagree, but for ease of review
> and to avoid the huge reposts perhaps we should consider merging
> something like patches 1 and 3* first, as the initial series.
> And then you can build up the driver from there in reasonable,
> 5kLoC-at-a-time series?
>
> * patch 3 currently includes things which do not seem machine
> generated / like register descriptors like structs added to
> nbl_def_channel.h and nbl_include.h
Without having looked into the details: yes, I think some
sort of staged approach could help here.
* Re: Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
[not found] ` <ad0a6f0f-3ce3-4f38-b27d-2f121511d6e3.illusion.wang@nebula-matrix.com>
@ 2026-01-29 13:18 ` Andrew Lunn
2026-01-30 2:23 ` Re: Re: [PATCH " Sam
0 siblings, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2026-01-29 13:18 UTC (permalink / raw)
To: Illusion Wang
Cc: y, Dimon, Alvin, Sam, netdev, andrew+netdev, corbet, kuba,
linux-doc, lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn,
hawk, ast, bpf, sdf, daniel, john.fastabend, edumazet, open list
On Mon, Jan 26, 2026 at 10:18:52AM +0800, Illusion Wang wrote:
>
> >_CORE is a bit unusual for a driver. CORE is generally a library which
> >is shared by a number of drivers. It does not have a tristate string,
> >it is just selected by the drivers which need it.
>
> Thank you for your feedback on the NBL_CORE configuration option. We appreciate
> your guidance on adhering to Linux kernel naming conventions.
>
> Why NBL_CORE?
> The driver is designed to support three distinct functionalities for the
> Nebula-matrix m18100 Ethernet controller:
>
> 1.Standard Ethernet/NIC functionality (via the main driver module).
> 2.Auxiliary device (auxiliary driver) for RDMA (exposed through
> auxiliary_device).
> 3.MDEV-based SR-IOV/VF emulation (for native device splitting).
> Since shared logic (e.g., PCIe resource management, MMIO access, interrupt
> handling, and common hardware abstractions) is required across both the main
> driver and auxiliary/RDMA driver, we opted for a _CORE suffix to indicate:
You can split this two ways:
A true core, which does not provide any devices, just core functions
to access the hardware. Then layered on top of this you have kernel
modules which provide Ethernet, RDMA, MDEV, etc. Such kernel modules
would then depend on the core, using Kconfig primitives. And the core
itself would not have a tri-state string, since on its own it is
useless.
Or
If the core always provides Ethernet, call it an Ethernet driver, not
core. The other kernel modules would then depend on having the
Ethernet driver, using Kconfig primitives.
This is all about naming, not architecture. Splitting it up into multiple
kernel modules is good, but your naming and descriptions of what these
modules do matter.
Andrew
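[Editor's note] The "true core" layout described above maps onto Kconfig roughly as follows; this is a minimal sketch, assuming hypothetical symbol names NBL_CORE and NBL_ETH rather than the driver's actual Kconfig entries:

```kconfig
# Hidden core library: "tristate" with no prompt string, so users
# cannot enable it directly -- it is only pulled in when a
# dependent driver selects it.
config NBL_CORE
	tristate

# User-visible Ethernet driver; selecting the core drags it in.
config NBL_ETH
	tristate "Nebula-matrix 18100 series Ethernet support"
	depends on PCI
	select NBL_CORE
	help
	  Ethernet driver for the Nebula-matrix 18100/18110 NICs.
```

In the second layout, the prompt string would instead sit on the combined core/Ethernet symbol, and the RDMA or MDEV modules would use `depends on` against it.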
* Re: Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-29 13:18 ` Andrew Lunn
@ 2026-01-30 2:23 ` Sam
2026-01-30 3:04 ` Andrew Lunn
0 siblings, 1 reply; 31+ messages in thread
From: Sam @ 2026-01-30 2:23 UTC (permalink / raw)
To: Andrew Lunn, Illusion Wang
Cc: y, Dimon, Alvin, netdev, andrew+netdev, corbet, kuba, linux-doc,
lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn, hawk, ast,
bpf, sdf, daniel, john.fastabend, edumazet, open list
Thank you for your feedback. I understand what you mean. The reasons why
we cannot change the module name now are as follows:
Our driver has already been integrated into multiple communities, and
the module name is nbl_core. Many customers have already deployed it. If
we change the name, there will be a critical problem of conflict between
the two modules during subsequent online driver upgrades.
PS: Our card's functionality is comparable to Mellanox CX, and the design
of nbl_core was initially based on their design principles.
* Re: Re: Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-30 2:23 ` Re: Re: [PATCH " Sam
@ 2026-01-30 3:04 ` Andrew Lunn
2026-01-30 3:31 ` Re: Re: Re: [PATCH " Sam
0 siblings, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2026-01-30 3:04 UTC (permalink / raw)
To: Sam
Cc: Illusion Wang, Dimon, Alvin, netdev, andrew+netdev, corbet, kuba,
linux-doc, lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn,
hawk, ast, bpf, sdf, daniel, john.fastabend, edumazet, open list
On Fri, Jan 30, 2026 at 10:23:50AM +0800, Sam wrote:
> Thank you for your feedback. I understand what you mean.The reasons why
> we cannot change the module name now are as follows:
> Our driver has already been integrated into multiple communities, and
> the module name is nbl_core. Many customers have already deployed it. If
> we change the name, there will be a critical problem of conflict between
> the two modules during subsequent online driver upgrades.
To a large extent, mainline does not care about your vendor driver,
and it does not care about any compatibility issues between your
vendor driver and mainline.
I've not looked at your driver in detail yet so I cannot comment on
your driver in particular. But we often see vendor drivers do things
mainline does not allow. Custom ioctl handlers, files in /sys, write
APIs in debugfs, statistic counters wrongly grouped, etc. In a vendor
driver this is OK; this is all open source, you are free to do what
you want. But when it comes to mainline, you have to keep to the
mainline rules. Such code will need to be removed, or reworked,
breaking compatibility with your vendor driver.
So it can be advantageous to have different names; it then becomes
clear whether a customer is using the vendor driver or mainline.
Andrew
* Re: Re: Re: [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
2026-01-30 3:04 ` Andrew Lunn
@ 2026-01-30 3:31 ` Sam
0 siblings, 0 replies; 31+ messages in thread
From: Sam @ 2026-01-30 3:31 UTC (permalink / raw)
To: Andrew Lunn
Cc: Illusion Wang, Dimon, Alvin, netdev, andrew+netdev, corbet, kuba,
linux-doc, lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn,
hawk, ast, bpf, sdf, daniel, john.fastabend, edumazet, open list
Thank you for your feedback. You might have misunderstood me.
Our difficulties lie in the following:
1. Assuming only the mainline version changes the name (Assume name "nbl"),
and our regularly released driver doesn't change its name, then when
customers upgrade to a new kernel (containing the "nbl" driver),
and then want to update our regularly released driver (named "nbl_core"),
a module (.ko) conflict will occur.
2. If both our mainline and regularly released drivers change their names,
then customers who are already using the "nbl_core" driver will also
encounter conflict issues when updating to the new driver "nbl".
Would it be possible to do this: rework our net driver as a driver based
on the auxiliary bus, with the PCIe driver handling only PCIe-related
processing, and have the two drivers share a single kernel module (.ko),
namely "nbl_core"?
Thread overview: 31+ messages
2026-01-23 1:17 [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
2026-01-23 1:31 ` Randy Dunlap
2026-01-23 3:27 ` Andrew Lunn
2026-01-26 3:23 ` Re: [PATCH " Illusion Wang
[not found] ` <ad0a6f0f-3ce3-4f38-b27d-2f121511d6e3.illusion.wang@nebula-matrix.com>
2026-01-29 13:18 ` Andrew Lunn
2026-01-30 2:23 ` Re: Re: [PATCH " Sam
2026-01-30 3:04 ` Andrew Lunn
2026-01-30 3:31 ` Re: Re: Re: [PATCH " Sam
2026-01-27 18:10 ` [PATCH " Simon Horman
2026-01-23 1:17 ` [PATCH v3 net-next 02/15] net/nebula-matrix: add our driver architecture illusion.wang
2026-01-23 3:57 ` Andrew Lunn
2026-01-23 1:17 ` [PATCH v3 net-next 03/15] net/nebula-matrix: add machine-generated headers and chip definitions illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 04/15] net/nebula-matrix: add channel layer implementation and the corresponding hw interfaces illusion.wang
2026-01-27 18:15 ` [v3,net-next,04/15] " Simon Horman
2026-01-23 1:17 ` [PATCH v3 net-next 05/15] net/nebula-matrix: add resource layer common part implementation illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 06/15] net/nebula-matrix: add intr resource implementation illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 07/15] net/nebula-matrix: add queue " illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 08/15] net/nebula-matrix: add vsi " illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 09/15] net/nebula-matrix: add txrx resource interfaces and tx func illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 10/15] net/nebula-matrix: add txrx interrupt handling logic illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 11/15] net/nebula-matrix: add Dispatch layer implementation illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 13/15] net/nebula-matrix: add net dev init/reinit operation illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 14/15] net/nebula-matrix: add common dev start/stop operation illusion.wang
2026-01-23 1:17 ` [PATCH v3 net-next 15/15] net/nebula-matrix: add net " illusion.wang
2026-01-23 3:15 ` [PATCH v3 net-next 12/15] net/nebula-matrix: add common/ctrl dev init/reinit operation illusion.wang
2026-01-27 18:06 ` [PATCH v3 net-next 00/15] nbl driver for Nebulamatrix NICs Simon Horman
2026-01-27 18:53 ` Jakub Kicinski
2026-01-27 22:21 ` Stephen Hemminger
2026-01-28 2:46 ` Re: [PATCH " Illusion Wang
2026-01-28 9:16 ` [PATCH " Simon Horman