* [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs
@ 2026-01-09 10:01 illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
` (15 more replies)
0 siblings, 16 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list
This patch series adds the nbl driver, which supports the nebula-matrix
18100 and 18110 series of network cards.
This submission is the first phase, which includes the PF-based and
VF-based Ethernet transmit and receive functionality. Once this is
merged, we will submit additional patches to implement support for other
features, such as ethtool and debugfs support.
Changes v1->v2
Link to v1: https://lore.kernel.org/netdev/20251223035113.31122-1-illusion.wang@nebula-matrix.com/
1. Fix format issues and compilation issues
   - Paolo Abeni
2. Add sysfs patch and drop coexisting patch
   - Andrew Lunn
3. Delete some unimportant ndo operations
4. Add machine-generated headers patch
5. Fix the issues found in patches 1-2 and apply the same fixes to the
   other patches
6. Fix issues found by NIPA
illusion.wang (15):
net/nebula-matrix: add minimum nbl build framework
net/nebula-matrix: add simple probe/remove
net/nebula-matrix: add HW layer definitions and implementation
net/nebula-matrix: add machine-generated headers and chip definitions
net/nebula-matrix: add channel layer definitions and implementation
net/nebula-matrix: add resource layer definitions and implementation
net/nebula-matrix: add intr resource definitions and implementation
net/nebula-matrix: add vsi, queue, adminq resource definitions and
implementation
net/nebula-matrix: add flow resource definitions and implementation
net/nebula-matrix: add txrx resource definitions and implementation
net/nebula-matrix: add Dispatch layer definitions and implementation
net/nebula-matrix: add Service layer definitions and implementation
net/nebula-matrix: add Dev init,remove operation
net/nebula-matrix: add Dev start, stop operation
net/nebula-matrix: add st_sysfs and vf name sysfs
.../ethernet/nebula-matrix/m18100.rst | 52 +
MAINTAINERS | 10 +
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/nebula-matrix/Kconfig | 39 +
drivers/net/ethernet/nebula-matrix/Makefile | 6 +
.../net/ethernet/nebula-matrix/nbl/Makefile | 29 +
.../nbl/nbl_channel/nbl_channel.c | 1482 ++++++
.../nbl/nbl_channel/nbl_channel.h | 205 +
.../nebula-matrix/nbl/nbl_common/nbl_common.c | 784 +++
.../nebula-matrix/nbl/nbl_common/nbl_common.h | 54 +
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 144 +
.../nebula-matrix/nbl/nbl_core/nbl_dev.c | 3194 ++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_dev.h | 270 ++
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.c | 4265 +++++++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_dispatch.h | 78 +
.../nebula-matrix/nbl/nbl_core/nbl_service.c | 3804 +++++++++++++++
.../nebula-matrix/nbl/nbl_core/nbl_service.h | 240 +
.../nebula-matrix/nbl/nbl_core/nbl_sysfs.c | 85 +
.../nebula-matrix/nbl/nbl_core/nbl_sysfs.h | 20 +
.../nebula-matrix/nbl/nbl_hw/nbl_adminq.c | 1446 ++++++
.../nebula-matrix/nbl/nbl_hw/nbl_adminq.h | 160 +
.../nebula-matrix/nbl/nbl_hw/nbl_hw.h | 172 +
.../nbl_hw/nbl_hw_leonis/base/nbl_datapath.h | 11 +
.../nbl_hw_leonis/base/nbl_datapath_dped.h | 2152 +++++++++
.../nbl_hw_leonis/base/nbl_datapath_dstore.h | 929 ++++
.../nbl_hw_leonis/base/nbl_datapath_ucar.h | 414 ++
.../nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h | 10 +
.../nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h | 665 +++
.../nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h | 1397 ++++++
.../nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c | 2268 +++++++++
.../nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h | 204 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 3186 ++++++++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h | 1714 +++++++
.../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c | 3863 +++++++++++++++
.../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h | 12 +
.../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c | 1430 ++++++
.../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h | 23 +
.../nbl_hw_leonis/nbl_resource_leonis.c | 1067 +++++
.../nbl_hw_leonis/nbl_resource_leonis.h | 28 +
.../nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h | 156 +
.../nebula-matrix/nbl/nbl_hw/nbl_interrupt.c | 448 ++
.../nebula-matrix/nbl/nbl_hw/nbl_interrupt.h | 13 +
.../nebula-matrix/nbl/nbl_hw/nbl_queue.c | 60 +
.../nebula-matrix/nbl/nbl_hw/nbl_queue.h | 11 +
.../nebula-matrix/nbl/nbl_hw/nbl_resource.c | 444 ++
.../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 878 ++++
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.c | 2150 +++++++++
.../nebula-matrix/nbl/nbl_hw/nbl_txrx.h | 184 +
.../nebula-matrix/nbl/nbl_hw/nbl_vsi.c | 168 +
.../nebula-matrix/nbl/nbl_hw/nbl_vsi.h | 12 +
.../nbl/nbl_include/nbl_def_channel.h | 715 +++
.../nbl/nbl_include/nbl_def_common.h | 410 ++
.../nbl/nbl_include/nbl_def_dev.h | 32 +
.../nbl/nbl_include/nbl_def_dispatch.h | 190 +
.../nbl/nbl_include/nbl_def_hw.h | 157 +
.../nbl/nbl_include/nbl_def_resource.h | 183 +
.../nbl/nbl_include/nbl_def_service.h | 156 +
.../nbl/nbl_include/nbl_include.h | 542 +++
.../nbl/nbl_include/nbl_product_base.h | 20 +
.../net/ethernet/nebula-matrix/nbl/nbl_main.c | 435 ++
61 files changed, 43278 insertions(+)
create mode 100644 Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
create mode 100644 drivers/net/ethernet/nebula-matrix/Kconfig
create mode 100644 drivers/net/ethernet/nebula-matrix/Makefile
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/Makefile
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
--
2.47.3
^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 net-next 01/15] net/nebula-matrix: add minimum nbl build framework
  2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
  2026-01-09 10:01 ` [PATCH v2 net-next 02/15] net/nebula-matrix: add simple probe/remove illusion.wang
  ` (14 subsequent siblings)
  15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
    vadim.fedorenko, lukas.bulwahn, edumazet, open list

1. Add minimal build infrastructure for the nbl driver.
2. Implement the framework for PCI device initialization.

Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
 .../ethernet/nebula-matrix/m18100.rst         |  52 ++++++++
 MAINTAINERS                                   |  10 ++
 drivers/net/ethernet/Kconfig                  |   1 +
 drivers/net/ethernet/Makefile                 |   1 +
 drivers/net/ethernet/nebula-matrix/Kconfig    |  39 ++++++
 drivers/net/ethernet/nebula-matrix/Makefile   |   6 +
 .../net/ethernet/nebula-matrix/nbl/Makefile   |  11 ++
 .../net/ethernet/nebula-matrix/nbl/nbl_core.h |  29 +++++
 .../nbl/nbl_include/nbl_include.h             |  24 ++++
 .../net/ethernet/nebula-matrix/nbl/nbl_main.c | 117 ++++++++++++++++++
 10 files changed, 290 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
 create mode 100644 drivers/net/ethernet/nebula-matrix/Kconfig
 create mode 100644 drivers/net/ethernet/nebula-matrix/Makefile
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/Makefile
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c

diff --git a/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst b/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
new file mode 100644
index 000000000000..e1b63a2bafe0
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst
@@ -0,0 +1,52 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================================
+Linux Base Driver for Nebula-matrix M18100-NIC family
+============================================================
+
+Overview:
+=========
+M18100-NIC is a series of network interface cards for the data center area.
+
+The driver supports link speeds of 100GbE/25GbE/10GbE.
+
+M18100-NIC devices support SR-IOV. This driver is used for both the Physical
+Function (PF) and the Virtual Function (VF).
+
+M18100-NIC devices support an MSI-X interrupt vector for each Tx/Rx queue and
+interrupt moderation.
+
+M18100-NIC devices also support various offload features such as checksum
+offload and Receive-Side Scaling (RSS).
+
+
+Supported PCI vendor ID/device IDs:
+===================================
+
+1f0f:3403 - M18110 Family PF
+1f0f:3404 - M18110 Lx Family PF
+1f0f:3405 - M18110 Family BASE-T PF
+1f0f:3406 - M18110 Lx Family BASE-T PF
+1f0f:3407 - M18110 Family OCP PF
+1f0f:3408 - M18110 Lx Family OCP PF
+1f0f:3409 - M18110 Family BASE-T OCP PF
+1f0f:340a - M18110 Lx Family BASE-T OCP PF
+1f0f:340b - M18100 Family PF
+1f0f:340c - M18100 Lx Family PF
+1f0f:340d - M18100 Family BASE-T PF
+1f0f:340e - M18100 Lx Family BASE-T PF
+1f0f:340f - M18100 Family OCP PF
+1f0f:3410 - M18100 Lx Family OCP PF
+1f0f:3411 - M18100 Family BASE-T OCP PF
+1f0f:3412 - M18100 Lx Family BASE-T OCP PF
+1f0f:3413 - M18100 Family Virtual Function
+
+Support
+=======
+
+For more information about M18100-NIC, please visit the following URL:
+https://www.nebula-matrix.com/
+
+If an issue is identified with the released source code on a supported kernel
+with a supported adapter, email the specific information related to the issue
+to open@nebula-matrix.com.
diff --git a/MAINTAINERS b/MAINTAINERS
index 765ad2daa218..6cf58be32a17 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18003,6 +18003,16 @@ F:	Documentation/devicetree/bindings/hwmon/nuvoton,nct7363.yaml
 F:	Documentation/hwmon/nct7363.rst
 F:	drivers/hwmon/nct7363.c
 
+NEBULA-MATRIX ETHERNET DRIVER (nebula-matrix)
+M:	Illusion Wang <illusion.wang@nebula-matrix.com>
+M:	Dimon Zhao <dimon.zhao@nebula-matrix.com>
+M:	Alvin Wang <alvin.wang@nebula-matrix.com>
+M:	Sam Chen <sam.chen@nebula-matrix.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	Documentation/networking/device_drivers/ethernet/nebula-matrix/*
+F:	drivers/net/ethernet/nebula-matrix/
+
 NETCONSOLE
 M:	Breno Leitao <leitao@debian.org>
 S:	Maintained
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index 4a1b368ca7e6..4753e203ba85 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -143,6 +143,7 @@ config FEALNX
 source "drivers/net/ethernet/ni/Kconfig"
 source "drivers/net/ethernet/natsemi/Kconfig"
+source "drivers/net/ethernet/nebula-matrix/Kconfig"
 source "drivers/net/ethernet/neterion/Kconfig"
 source "drivers/net/ethernet/netronome/Kconfig"
 source "drivers/net/ethernet/8390/Kconfig"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 2e18df8ca8ec..fec3cbf75f10 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -69,6 +69,7 @@ obj-$(CONFIG_NET_VENDOR_MUCSE) += mucse/
 obj-$(CONFIG_NET_VENDOR_MYRI) += myricom/
 obj-$(CONFIG_FEALNX) += fealnx.o
 obj-$(CONFIG_NET_VENDOR_NATSEMI) += natsemi/
+obj-$(CONFIG_NET_VENDOR_NEBULA_MATRIX) += nebula-matrix/
 obj-$(CONFIG_NET_VENDOR_NETERION) += neterion/
 obj-$(CONFIG_NET_VENDOR_NETRONOME) += netronome/
 obj-$(CONFIG_NET_VENDOR_NI) += ni/
diff --git a/drivers/net/ethernet/nebula-matrix/Kconfig b/drivers/net/ethernet/nebula-matrix/Kconfig
new file mode 100644
index 000000000000..ff786917f2bf
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/Kconfig
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Nebula-matrix network device configuration
+#
+
+config NET_VENDOR_NEBULA_MATRIX
+	bool "Nebula-matrix devices"
+	default y
+	help
+	  If you have a network (Ethernet) card belonging to this class, say Y.
+
+	  Note that the answer to this question doesn't directly affect the
+	  kernel: saying N will just cause the configurator to skip all
+	  the questions about Nebula-matrix cards. If you say Y, you will be
+	  asked for your specific card in the following questions.
+
+if NET_VENDOR_NEBULA_MATRIX
+
+config NBL_CORE
+	tristate "Nebula-matrix Ethernet Controller m18110 Family support"
+	depends on PCI && VFIO
+	depends on ARM64 || X86_64
+	default m
+	select PLDMFW
+	select PAGE_POOL
+	help
+	  This driver supports the Nebula-matrix Ethernet Controller m18110
+	  Family of devices. For more information about this product, see the
+	  product description at:
+
+	  <http://www.nebula-matrix.com>
+
+	  More specific information on configuring the driver is in
+	  <file:Documentation/networking/device_drivers/ethernet/nebula-matrix/m18100.rst>.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called nbl_core.
+
+endif # NET_VENDOR_NEBULA_MATRIX
diff --git a/drivers/net/ethernet/nebula-matrix/Makefile b/drivers/net/ethernet/nebula-matrix/Makefile
new file mode 100644
index 000000000000..dc6bf7dcd6bf
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Nebula-matrix network device drivers.
+#
+
+obj-$(CONFIG_NBL_CORE) += nbl/
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
new file mode 100644
index 000000000000..df16a3436a5c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2025 Nebula Matrix Limited.
+# Author:
+
+obj-$(CONFIG_NBL_CORE) := nbl_core.o
+
+nbl_core-objs += nbl_main.o
+
+# Provide include files
+ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
new file mode 100644
index 000000000000..e91de717bfe8
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_CORE_H_
+#define _NBL_CORE_H_
+
+#include <linux/pci.h>
+#include "nbl_include.h"
+
+#define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
+
+#define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
+#define NBL_CAP_IS_NET(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_NET_BIT)
+#define NBL_CAP_IS_VF(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_VF_BIT)
+#define NBL_CAP_IS_NIC(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_NIC_BIT)
+#define NBL_CAP_IS_OCP(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_OCP_BIT)
+#define NBL_CAP_IS_LEONIS(val) NBL_CAP_TEST_BIT(val, NBL_CAP_IS_LEONIS_BIT)
+
+enum {
+	NBL_CAP_HAS_CTRL_BIT = 0,
+	NBL_CAP_HAS_NET_BIT,
+	NBL_CAP_IS_VF_BIT,
+	NBL_CAP_IS_NIC_BIT,
+	NBL_CAP_IS_LEONIS_BIT,
+	NBL_CAP_IS_OCP_BIT,
+};
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
new file mode 100644
index 000000000000..963e13927a79
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_INCLUDE_H_
+#define _NBL_INCLUDE_H_
+
+#include <linux/types.h>
+
+/* ------ Basic definitions ------- */
+#define NBL_DRIVER_NAME "nbl_core"
+
+struct nbl_func_caps {
+	u32 has_ctrl:1;
+	u32 has_net:1;
+	u32 is_vf:1;
+	u32 is_nic:1;
+	u32 is_ocp:1;
+	u32 rsv:27;
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
new file mode 100644
index 000000000000..ddb45144ff1c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include <linux/aer.h>
+#include "nbl_core.h"
+
+static int nbl_probe(struct pci_dev *pdev,
+		     const struct pci_device_id __always_unused *id)
+{
+	struct device *dev = &pdev->dev;
+
+	dev_dbg(dev, "nbl probe ok!\n");
+	return 0;
+}
+
+static void nbl_remove(struct pci_dev *pdev)
+{
+	dev_dbg(&pdev->dev, "nbl remove OK!\n");
+}
+
+#define NBL_VENDOR_ID (0x1F0F)
+
+/*
+ * Leonis DeviceID
+ * 0x3403-0x340d for snic v3r1 product
+ */
+#define NBL_DEVICE_ID_M18110 (0x3403)
+#define NBL_DEVICE_ID_M18110_LX (0x3404)
+#define NBL_DEVICE_ID_M18110_BASE_T (0x3405)
+#define NBL_DEVICE_ID_M18110_LX_BASE_T (0x3406)
+#define NBL_DEVICE_ID_M18110_OCP (0x3407)
+#define NBL_DEVICE_ID_M18110_LX_OCP (0x3408)
+#define NBL_DEVICE_ID_M18110_BASE_T_OCP (0x3409)
+#define NBL_DEVICE_ID_M18110_LX_BASE_T_OCP (0x340a)
+#define NBL_DEVICE_ID_M18000 (0x340b)
+#define NBL_DEVICE_ID_M18000_LX (0x340c)
+#define NBL_DEVICE_ID_M18000_BASE_T (0x340d)
+#define NBL_DEVICE_ID_M18000_LX_BASE_T (0x340e)
+#define NBL_DEVICE_ID_M18000_OCP (0x340f)
+#define NBL_DEVICE_ID_M18000_LX_OCP (0x3410)
+#define NBL_DEVICE_ID_M18000_BASE_T_OCP (0x3411)
+#define NBL_DEVICE_ID_M18000_LX_BASE_T_OCP (0x3412)
+#define NBL_DEVICE_ID_M18000_VF (0x3413)
+
+static const struct pci_device_id nbl_id_table[] = {
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_BASE_T),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_BASE_T),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_BASE_T_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_BASE_T_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_BASE_T),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX_BASE_T),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_BASE_T_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_LX_BASE_T_OCP),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_NIC_BIT) |
+			 BIT(NBL_CAP_IS_LEONIS_BIT) | BIT(NBL_CAP_IS_OCP_BIT) },
+	{ PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18000_VF),
+	  .driver_data = BIT(NBL_CAP_HAS_NET_BIT) | BIT(NBL_CAP_IS_VF_BIT) |
+			 BIT(NBL_CAP_IS_NIC_BIT) | BIT(NBL_CAP_IS_LEONIS_BIT) },
+	/* required as sentinel */
+	{
+		0,
+	}
+};
+MODULE_DEVICE_TABLE(pci, nbl_id_table);
+
+static struct pci_driver nbl_driver = {
+	.name = NBL_DRIVER_NAME,
+	.id_table = nbl_id_table,
+	.probe = nbl_probe,
+	.remove = nbl_remove,
+};
+
+module_pci_driver(nbl_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Nebula Matrix Network Driver");
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 19+ messages in thread
* [PATCH v2 net-next 02/15] net/nebula-matrix: add simple probe/remove
  2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
  2026-01-09 10:01 ` [PATCH v2 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
  2026-01-09 10:01 ` [PATCH v2 net-next 03/15] net/nebula-matrix: add HW layer definitions and implementation illusion.wang
  ` (13 subsequent siblings)
  15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
    vadim.fedorenko, lukas.bulwahn, edumazet, open list

Our driver architecture is relatively complex because the code is highly
reusable and designed to support multiple features. Additionally, the
codebase supports multiple chip variants, each with distinct
hardware-software interactions. To ensure compatibility, our
architecture is divided into the following layers:

1. Dev Layer (Device Layer)
The top-level business logic layer, where all operations are
device-centric. Every operation is performed relative to the device
context. The integration of base functions encompasses:
management (ctrl_dev), network (net_dev), and common.

2. Service Layer
The Service Layer includes various netops services such as packet
receiving/sending, ethtool services, management services, etc. These are
provided to the upper layer for use or registered as the operations
(ops) of related devices. It describes all the service capabilities
possessed by the device.

3. Dispatch Layer
The distribution from services to specific data operations is mainly
divided into two types: direct pass-through and handling by the
management PF. It shields the upper layer from the differences in the
specific underlying locations. It describes the processing locations and
paths of the services.

4. Resource Layer
Handles tasks dispatched from the Dispatch Layer. These tasks fall into
two categories:
4.1 Hardware control
The Resource Layer further invokes the HW Layer when hardware access is
needed, as only the HW Layer has OS-level privileges.
4.2 Software resource management
Operations like packet statistics collection that don't require hardware
access.

5. HW Layer (Hardware Layer)
Serves the Resource Layer by interacting with different hardware
chipsets. It writes to hardware registers to drive the hardware based on
Resource Layer directives.

6. Channel Layer
Handles communication between PF0 and the other PFs (our PF0 has the
control function) and between PF and VF, as well as communication with
the EMP (Embedded Management Processor), providing the basic interaction
channels.

7. Common Layer
Provides fundamental services, including workqueue management, debug
logging, and so on.

Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
 .../net/ethernet/nebula-matrix/nbl/nbl_core.h |  27 +++-
 .../nbl/nbl_include/nbl_def_common.h          | 108 ++++++++++++++
 .../nbl/nbl_include/nbl_include.h             |  13 +-
 .../nbl/nbl_include/nbl_product_base.h        |  20 +++
 .../net/ethernet/nebula-matrix/nbl/nbl_main.c | 138 ++++++++++++++++++
 5 files changed, 304 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index e91de717bfe8..4e2618bef23a 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -8,7 +8,13 @@
 #define _NBL_CORE_H_
 
 #include <linux/pci.h>
-#include "nbl_include.h"
+#include "nbl_product_base.h"
+#include "nbl_def_common.h"
+
+#define NBL_ADAP_TO_PDEV(adapter) ((adapter)->pdev)
+#define NBL_ADAP_TO_DEV(adapter) (&((adapter)->pdev->dev))
+#define NBL_ADAP_TO_COMMON(adapter) (&((adapter)->common))
+#define NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter) ((adapter)->product_base_ops)
 #define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
 
 #define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT)
@@ -26,4 +32,23 @@ enum {
 	NBL_CAP_IS_LEONIS_BIT,
 	NBL_CAP_IS_OCP_BIT,
 };
+
+struct nbl_interface {
+};
+
+struct nbl_core {
+};
+
+struct nbl_adapter {
+	struct pci_dev *pdev;
+	struct nbl_core core;
+	struct nbl_interface intf;
+	struct nbl_common_info common;
+	struct nbl_product_base_ops *product_base_ops;
+	struct nbl_init_param init_param;
+};
+
+struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
+				  struct nbl_init_param *param);
+void nbl_core_remove(struct nbl_adapter *adapter);
 #endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
new file mode 100644
index 000000000000..3533b853abc4
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_COMMON_H_
+#define _NBL_DEF_COMMON_H_
+
+#include <linux/netdev_features.h>
+#include "nbl_include.h"
+
+#define nbl_err(common, fmt, ...) \
+do { \
+	typeof(common) _common = (common); \
+	dev_err(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define nbl_warn(common, fmt, ...) \
+do { \
+	typeof(common) _common = (common); \
+	dev_warn(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define nbl_info(common, fmt, ...) \
+do { \
+	typeof(common) _common = (common); \
+	dev_info(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+#define nbl_debug(common, fmt, ...) \
+do { \
+	typeof(common) _common = (common); \
+	dev_dbg(NBL_COMMON_TO_DEV(_common), fmt, ##__VA_ARGS__);\
+} while (0)
+
+static inline void __maybe_unused nbl_printk(struct device *dev, int level,
+					     const char *format, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	if (WARN_ONCE(level < LOGLEVEL_EMERG || level > LOGLEVEL_DEBUG,
+		      "Level %d is out of range, set to default level\n",
+		      level))
+		level = LOGLEVEL_DEFAULT;
+
+	va_start(args, format);
+	vaf.fmt = format;
+	vaf.va = &args;
+
+	dev_printk_emit(level, dev, "%s %s: %pV", dev_driver_string(dev),
+			dev_name(dev), &vaf);
+	va_end(args);
+}
+
+/* support LOGLEVEL_EMERG/LOGLEVEL_CRIT loglevel */
+#define nbl_log(common, level, format, ...) \
+do { \
+	typeof(common) _common = (common); \
+	nbl_printk(NBL_COMMON_TO_DEV(_common), level, format, \
+		   ##__VA_ARGS__); \
+} while (0)
+
+#define NBL_COMMON_TO_PDEV(common) ((common)->pdev)
+#define NBL_COMMON_TO_DEV(common) ((common)->dev)
+#define NBL_COMMON_TO_DMA_DEV(common) ((common)->dma_dev)
+#define NBL_COMMON_TO_VSI_ID(common) ((common)->vsi_id)
+#define NBL_COMMON_TO_ETH_ID(common) ((common)->eth_id)
+#define NBL_COMMON_TO_ETH_MODE(common) ((common)->eth_mode)
+#define NBL_COMMON_TO_DEBUG_LVL(common) ((common)->debug_lvl)
+#define NBL_COMMON_TO_VF_CAP(common) ((common)->is_vf)
+#define NBL_COMMON_TO_OCP_CAP(common) ((common)->is_ocp)
+#define NBL_COMMON_TO_PCI_USING_DAC(common) ((common)->pci_using_dac)
+#define NBL_COMMON_TO_MGT_PF(common) ((common)->mgt_pf)
+#define NBL_COMMON_TO_PCI_FUNC_ID(common) ((common)->function)
+#define NBL_COMMON_TO_BOARD_ID(common) ((common)->board_id)
+#define NBL_COMMON_TO_LOGIC_ETH_ID(common) ((common)->logic_eth_id)
+
+struct nbl_common_info {
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct device *dma_dev;
+	u32 msg_enable;
+	u16 vsi_id;
+	u8 eth_id;
+	u8 logic_eth_id;
+	u8 eth_mode;
+	u8 is_vf;
+
+	u8 function;
+	u8 devid;
+	u8 bus;
+	/* only valid for ctrldev */
+	u8 hw_bus;
+
+	u16 mgt_pf;
+	u8 board_id;
+
+	bool pci_using_dac;
+	u8 is_ocp;
+
+	enum nbl_product_type product_type;
+
+	bool wol_ena;
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 963e13927a79..6f655d95d654 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -8,10 +8,15 @@
 #define _NBL_INCLUDE_H_
 
 #include <linux/types.h>
-
+#include <linux/netdevice.h>
 /* ------ Basic definitions ------- */
 #define NBL_DRIVER_NAME "nbl_core"
 
+enum nbl_product_type {
+	NBL_LEONIS_TYPE,
+	NBL_PRODUCT_MAX,
+};
+
 struct nbl_func_caps {
 	u32 has_ctrl:1;
 	u32 has_net:1;
@@ -21,4 +26,10 @@ struct nbl_func_caps {
 	u32 rsv:27;
 };
 
+struct nbl_init_param {
+	struct nbl_func_caps caps;
+	enum nbl_product_type product_type;
+	bool pci_using_dac;
+};
+
 #endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
new file mode 100644
index 000000000000..2f530c6b112c
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_product_base.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_DEF_PRODUCT_BASE_H_
+#define _NBL_DEF_PRODUCT_BASE_H_
+
+#include "nbl_include.h"
+struct nbl_product_base_ops {
+	int (*hw_init)(void *p, struct nbl_init_param *param);
+	void (*hw_remove)(void *p);
+	int (*res_init)(void *p, struct nbl_init_param *param);
+	void (*res_remove)(void *p);
+	int (*chan_init)(void *p, struct nbl_init_param *param);
+	void (*chan_remove)(void *p);
+};
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
index ddb45144ff1c..d9d79803bef5 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -7,17 +7,155 @@
 #include <linux/aer.h>
 #include "nbl_core.h"
 
+static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = {
+	{
+		.hw_init = NULL,
+		.hw_remove = NULL,
+		.res_init = NULL,
+		.res_remove = NULL,
+		.chan_init = NULL,
+		.chan_remove = NULL,
+	},
+};
+
+static void
+nbl_core_setup_product_ops(struct nbl_adapter *adapter,
+			   struct nbl_init_param *param,
+			   struct nbl_product_base_ops **product_base_ops)
+{
+	adapter->product_base_ops = &nbl_product_base_ops[param->product_type];
+	*product_base_ops = adapter->product_base_ops;
+}
+
+struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
+				  struct nbl_init_param *param)
+{
+	struct nbl_adapter *adapter;
+	struct nbl_common_info *common;
+	struct nbl_product_base_ops *product_base_ops;
+
+	if (!pdev)
+		return NULL;
+
+	adapter = devm_kzalloc(&pdev->dev, sizeof(struct nbl_adapter),
+			       GFP_KERNEL);
+	if (!adapter)
+		return NULL;
+
+	adapter->pdev = pdev;
+	common = NBL_ADAP_TO_COMMON(adapter);
+
+	common->pdev = pdev;
+	common->dev = &pdev->dev;
+	common->dma_dev = &pdev->dev;
+	common->is_vf = param->caps.is_vf;
+	common->is_ocp = param->caps.is_ocp;
+	common->pci_using_dac = param->pci_using_dac;
+	common->function = PCI_FUNC(pdev->devfn);
+	common->devid = PCI_SLOT(pdev->devfn);
+	common->bus =
pdev->bus->number; + common->product_type = param->product_type; + + memcpy(&adapter->init_param, param, sizeof(adapter->init_param)); + + nbl_core_setup_product_ops(adapter, param, &product_base_ops); + + return adapter; +} + +void nbl_core_remove(struct nbl_adapter *adapter) +{ + struct device *dev; + + dev = NBL_ADAP_TO_DEV(adapter); + devm_kfree(dev, adapter); +} + +static void nbl_get_func_param(struct pci_dev *pdev, kernel_ulong_t driver_data, + struct nbl_init_param *param) +{ + param->caps.has_ctrl = NBL_CAP_IS_CTRL(driver_data); + param->caps.has_net = NBL_CAP_IS_NET(driver_data); + param->caps.is_vf = NBL_CAP_IS_VF(driver_data); + param->caps.is_nic = NBL_CAP_IS_NIC(driver_data); + param->caps.is_ocp = NBL_CAP_IS_OCP(driver_data); + + if (NBL_CAP_IS_LEONIS(driver_data)) + param->product_type = NBL_LEONIS_TYPE; + + /* + * On Leonis only PF0 has the ctrl capability, but PF0's PCIe + * device_id is the same as the other PFs', so handle it specially. + */ + if (param->product_type == NBL_LEONIS_TYPE && !param->caps.is_vf && + (PCI_FUNC(pdev->devfn) == 0)) + param->caps.has_ctrl = 1; +} + static int nbl_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *id) { struct device *dev = &pdev->dev; + struct nbl_adapter *adapter = NULL; + struct nbl_init_param param = {{0}}; + int err; + + if (pci_enable_device(pdev)) { + dev_err(&pdev->dev, "Failed to enable PCI device\n"); + return -ENODEV; + } + + param.pci_using_dac = true; + nbl_get_func_param(pdev, id->driver_data, &param); + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); + if (err) { + dev_err(dev, "Configure DMA 64 bit mask failed, err = %d\n", + err); + param.pci_using_dac = false; + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); + if (err) { + dev_err(dev, + "Configure DMA 32 bit mask failed, err = %d\n", + err); + goto configure_dma_err; + } + } + + pci_set_master(pdev); + + pci_save_state(pdev); + + adapter = nbl_core_init(pdev, &param); + if (!adapter) { + dev_err(dev, "Nbl adapter init 
fail\n"); + err = -ENOMEM; + goto adapter_init_err; + } + + pci_set_drvdata(pdev, adapter); dev_dbg(dev, "nbl probe ok!\n"); return 0; +adapter_init_err: + pci_clear_master(pdev); +configure_dma_err: + pci_disable_device(pdev); + return err; } static void nbl_remove(struct pci_dev *pdev) { + struct nbl_adapter *adapter = pci_get_drvdata(pdev); + + dev_dbg(&pdev->dev, "nbl remove\n"); + if (!adapter) + return; + pci_disable_sriov(pdev); + nbl_core_remove(adapter); + + pci_clear_master(pdev); + pci_disable_device(pdev); + dev_dbg(&pdev->dev, "nbl remove OK!\n"); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 03/15] net/nebula-matrix: add HW layer definitions and implementation 2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang 2026-01-09 10:01 ` [PATCH v2 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang 2026-01-09 10:01 ` [PATCH v2 net-next 02/15] net/nebula-matrix: add simple probe/remove illusion.wang @ 2026-01-09 10:01 ` illusion.wang 2026-01-09 10:01 ` [PATCH v2 net-next 04/15] net/nebula-matrix: add machine-generated headers and chip definitions illusion.wang ` (12 subsequent siblings) 15 siblings, 0 replies; 19+ messages in thread From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw) To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn, edumazet, open list add HW layer related definitions and product ops Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 4 +- .../net/ethernet/nebula-matrix/nbl/nbl_core.h | 11 ++ .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 179 ++++++++++++++++++ .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h | 13 ++ .../nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h | 156 +++++++++++++++ .../nbl/nbl_include/nbl_def_hw.h | 23 +++ .../nbl/nbl_include/nbl_include.h | 14 ++ .../net/ethernet/nebula-matrix/nbl/nbl_main.c | 19 +- 8 files changed, 416 insertions(+), 3 deletions(-) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index df16a3436a5c..d5cadc289366 100644 --- 
a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -4,8 +4,10 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o -nbl_core-objs += nbl_main.o +nbl_core-objs += nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \ + nbl_main.o # Provide include files ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/ +ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw ccflags-y += -I$(srctree)/drivers/net/ethernet/nebula-matrix/nbl/ diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h index 4e2618bef23a..33ed810ec7d0 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h @@ -9,12 +9,16 @@ #include <linux/pci.h> #include "nbl_product_base.h" +#include "nbl_def_hw.h" #include "nbl_def_common.h" #define NBL_ADAP_TO_PDEV(adapter) ((adapter)->pdev) #define NBL_ADAP_TO_DEV(adapter) (&((adapter)->pdev->dev)) #define NBL_ADAP_TO_COMMON(adapter) (&((adapter)->common)) #define NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter) ((adapter)->product_base_ops) + +#define NBL_ADAP_TO_HW_MGT(adapter) ((adapter)->core.hw_mgt) +#define NBL_ADAP_TO_HW_OPS_TBL(adapter) ((adapter)->intf.hw_ops_tbl) #define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1) #define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT) @@ -34,9 +38,16 @@ enum { }; struct nbl_interface { + struct nbl_hw_ops_tbl *hw_ops_tbl; }; struct nbl_core { + void *hw_mgt; + void *res_mgt; + void *disp_mgt; + void *serv_mgt; + void *dev_mgt; + void *chan_mgt; }; struct nbl_adapter { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c new file mode 100644 index 000000000000..40701ff147e2 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c @@ -0,0 +1,179 @@ +// 
SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#include "nbl_hw_leonis.h" + +static struct nbl_hw_ops hw_ops = { +}; + +/* Structure starts here, adding an op should not modify anything below */ +static int nbl_hw_setup_hw_mgt(struct nbl_common_info *common, + struct nbl_hw_mgt_leonis **hw_mgt_leonis) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + *hw_mgt_leonis = + devm_kzalloc(dev, sizeof(struct nbl_hw_mgt_leonis), GFP_KERNEL); + if (!*hw_mgt_leonis) + return -ENOMEM; + + (&(*hw_mgt_leonis)->hw_mgt)->common = common; + + return 0; +} + +static void nbl_hw_remove_hw_mgt(struct nbl_common_info *common, + struct nbl_hw_mgt_leonis **hw_mgt_leonis) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + devm_kfree(dev, *hw_mgt_leonis); + *hw_mgt_leonis = NULL; +} + +static int nbl_hw_setup_ops(struct nbl_common_info *common, + struct nbl_hw_ops_tbl **hw_ops_tbl, + struct nbl_hw_mgt_leonis *hw_mgt_leonis) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + *hw_ops_tbl = + devm_kzalloc(dev, sizeof(struct nbl_hw_ops_tbl), GFP_KERNEL); + if (!*hw_ops_tbl) + return -ENOMEM; + + (*hw_ops_tbl)->ops = &hw_ops; + (*hw_ops_tbl)->priv = hw_mgt_leonis; + + return 0; +} + +static void nbl_hw_remove_ops(struct nbl_common_info *common, + struct nbl_hw_ops_tbl **hw_ops_tbl) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + devm_kfree(dev, *hw_ops_tbl); + *hw_ops_tbl = NULL; +} + +int nbl_hw_init_leonis(void *p, struct nbl_init_param *param) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct nbl_common_info *common; + struct pci_dev *pdev; + struct nbl_hw_mgt_leonis **hw_mgt_leonis; + struct nbl_hw_mgt *hw_mgt; + struct nbl_hw_ops_tbl **hw_ops_tbl; + int bar_mask; + int ret = 0; + + common = NBL_ADAP_TO_COMMON(adapter); + hw_mgt_leonis = + (struct nbl_hw_mgt_leonis **)&NBL_ADAP_TO_HW_MGT(adapter); + hw_ops_tbl = &NBL_ADAP_TO_HW_OPS_TBL(adapter); + pdev = 
NBL_COMMON_TO_PDEV(common); + + ret = nbl_hw_setup_hw_mgt(common, hw_mgt_leonis); + if (ret) + goto setup_mgt_fail; + + hw_mgt = &(*hw_mgt_leonis)->hw_mgt; + bar_mask = BIT(NBL_MEMORY_BAR) | BIT(NBL_MAILBOX_BAR); + ret = pci_request_selected_regions(pdev, bar_mask, NBL_DRIVER_NAME); + if (ret) { + dev_err(&pdev->dev, + "Request memory bar and mailbox bar failed, err = %d\n", + ret); + goto request_bar_region_fail; + } + + if (param->caps.has_ctrl) { + hw_mgt->hw_addr = + ioremap(pci_resource_start(pdev, NBL_MEMORY_BAR), + pci_resource_len(pdev, NBL_MEMORY_BAR) - + NBL_RDMA_NOTIFY_OFF); + if (!hw_mgt->hw_addr) { + dev_err(&pdev->dev, "Memory bar ioremap failed\n"); + ret = -EIO; + goto ioremap_err; + } + hw_mgt->hw_size = pci_resource_len(pdev, NBL_MEMORY_BAR) - + NBL_RDMA_NOTIFY_OFF; + } else { + hw_mgt->hw_addr = + ioremap(pci_resource_start(pdev, NBL_MEMORY_BAR), + NBL_RDMA_NOTIFY_OFF); + if (!hw_mgt->hw_addr) { + dev_err(&pdev->dev, "Memory bar ioremap failed\n"); + ret = -EIO; + goto ioremap_err; + } + hw_mgt->hw_size = NBL_RDMA_NOTIFY_OFF; + } + + hw_mgt->notify_offset = 0; + hw_mgt->mailbox_bar_hw_addr = pci_ioremap_bar(pdev, NBL_MAILBOX_BAR); + if (!hw_mgt->mailbox_bar_hw_addr) { + dev_err(&pdev->dev, "Mailbox bar ioremap failed\n"); + ret = -EIO; + goto mailbox_ioremap_err; + } + + spin_lock_init(&hw_mgt->reg_lock); + hw_mgt->should_lock = true; + + ret = nbl_hw_setup_ops(common, hw_ops_tbl, *hw_mgt_leonis); + if (ret) + goto setup_ops_fail; + + (*hw_mgt_leonis)->ro_enable = pcie_relaxed_ordering_enabled(pdev); + + return 0; + +setup_ops_fail: + iounmap(hw_mgt->mailbox_bar_hw_addr); +mailbox_ioremap_err: + iounmap(hw_mgt->hw_addr); +ioremap_err: + pci_release_selected_regions(pdev, bar_mask); +request_bar_region_fail: + nbl_hw_remove_hw_mgt(common, hw_mgt_leonis); +setup_mgt_fail: + return ret; +} + +void nbl_hw_remove_leonis(void *p) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct nbl_common_info *common; + struct nbl_hw_mgt_leonis 
**hw_mgt_leonis; + struct nbl_hw_ops_tbl **hw_ops_tbl; + struct pci_dev *pdev; + u8 __iomem *hw_addr; + u8 __iomem *mailbox_bar_hw_addr; + int bar_mask = BIT(NBL_MEMORY_BAR) | BIT(NBL_MAILBOX_BAR); + + common = NBL_ADAP_TO_COMMON(adapter); + hw_mgt_leonis = + (struct nbl_hw_mgt_leonis **)&NBL_ADAP_TO_HW_MGT(adapter); + hw_ops_tbl = &NBL_ADAP_TO_HW_OPS_TBL(adapter); + pdev = NBL_COMMON_TO_PDEV(common); + + hw_addr = (*hw_mgt_leonis)->hw_mgt.hw_addr; + mailbox_bar_hw_addr = (*hw_mgt_leonis)->hw_mgt.mailbox_bar_hw_addr; + + iounmap(mailbox_bar_hw_addr); + iounmap(hw_addr); + pci_release_selected_regions(pdev, bar_mask); + nbl_hw_remove_hw_mgt(common, hw_mgt_leonis); + + nbl_hw_remove_ops(common, hw_ops_tbl); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h new file mode 100644 index 000000000000..b078b765f772 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_HW_LEONIS_H_ +#define _NBL_HW_LEONIS_H_ + +#include "nbl_core.h" +#include "nbl_hw_reg.h" + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h new file mode 100644 index 000000000000..51518bb78b4f --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_reg.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_HW_REG_H_ +#define _NBL_HW_REG_H_ + +#include "nbl_core.h" + +#define NBL_HW_MGT_TO_COMMON(hw_mgt) ((hw_mgt)->common) +#define NBL_HW_MGT_TO_DEV(hw_mgt) \ + NBL_COMMON_TO_DEV(NBL_HW_MGT_TO_COMMON(hw_mgt)) +#define NBL_MEMORY_BAR (0) +#define NBL_MAILBOX_BAR (2) +#define NBL_RDMA_NOTIFY_OFF (8192) + +struct nbl_hw_mgt { + struct nbl_common_info *common; + u8 __iomem *hw_addr; + u8 __iomem *mailbox_bar_hw_addr; + u64 notify_offset; + u32 version; + u32 hw_size; + spinlock_t reg_lock; /* Protect reg access */ + bool should_lock; + u8 resv[3]; + enum nbl_hw_status hw_status; +}; + +static inline u32 rd32(u8 __iomem *addr, u64 reg) +{ + return readl(addr + (reg)); +} + +static inline void wr32_barrier(u8 __iomem *addr, u64 reg, u32 value) +{ + writel((value), (addr + (reg))); +} + +static inline void nbl_hw_rd_regs(struct nbl_hw_mgt *hw_mgt, u64 reg, + u8 *data, u32 len) +{ + u32 size = len / 4; + u32 i = 0; + + if (len % 4) + return; + + if (hw_mgt->hw_status) { + for (i = 0; i < size; i++) + *(u32 *)(data + i * sizeof(u32)) = U32_MAX; + return; + } + + spin_lock(&hw_mgt->reg_lock); + + for (i = 0; i < size; i++) + *(u32 *)(data + i * sizeof(u32)) = + rd32(hw_mgt->hw_addr, reg + i * sizeof(u32)); + spin_unlock(&hw_mgt->reg_lock); +} + +static inline void nbl_hw_wr_regs(struct nbl_hw_mgt *hw_mgt, + u64 reg, const u8 *data, u32 len) +{ + u32 size = len / 4; + u32 i = 0; + + if (len % 4) + return; + + if (hw_mgt->hw_status) + return; + spin_lock(&hw_mgt->reg_lock); + for (i = 0; i < size; i++) + /* Used for emu, make sure that we won't write too frequently */ + wr32_barrier(hw_mgt->hw_addr, reg + i * sizeof(u32), + *(u32 *)(data + i * sizeof(u32))); + spin_unlock(&hw_mgt->reg_lock); +} + +static inline void nbl_hw_wr32(struct nbl_hw_mgt *hw_mgt, u64 reg, u32 value) +{ + if (hw_mgt->hw_status) + return; + + /* Used for emu, make sure that we won't write too frequently */ + wr32_barrier(hw_mgt->hw_addr, reg, value); +} + +static inline u32 
nbl_hw_rd32(struct nbl_hw_mgt *hw_mgt, u64 reg) +{ + if (hw_mgt->hw_status) + return U32_MAX; + + return rd32(hw_mgt->hw_addr, reg); +} + +static inline void nbl_mbx_wr32(void *priv, u64 reg, u32 value) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + if (hw_mgt->hw_status) + return; + + writel((value), ((hw_mgt)->mailbox_bar_hw_addr + (reg))); +} + +static inline u32 nbl_mbx_rd32(void *priv, u64 reg) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + if (hw_mgt->hw_status) + return U32_MAX; + + return readl((hw_mgt)->mailbox_bar_hw_addr + (reg)); +} + +static inline void nbl_hw_read_mbx_regs(struct nbl_hw_mgt *hw_mgt, + u64 reg, u8 *data, u32 len) +{ + u32 i = 0; + + if (len % 4) + return; + + for (i = 0; i < len / 4; i++) + *(u32 *)(data + i * sizeof(u32)) = + nbl_mbx_rd32(hw_mgt, reg + i * sizeof(u32)); +} + +static inline void nbl_hw_write_mbx_regs(struct nbl_hw_mgt *hw_mgt, + u64 reg, const u8 *data, u32 len) +{ + u32 i = 0; + + if (len % 4) + return; + + for (i = 0; i < len / 4; i++) + /* Used for emu, make sure that we won't write too frequently */ + nbl_mbx_wr32(hw_mgt, reg + i * sizeof(u32), + *(u32 *)(data + i * sizeof(u32))); +} + +/* Mgt structure for each product. + * Every individual mgt must have the common mgt as its first member, + * and contain its unique data structure in the rest of it. + */ +struct nbl_hw_mgt_leonis { + struct nbl_hw_mgt hw_mgt; + bool ro_enable; +}; +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h new file mode 100644 index 000000000000..6ac72e26ccd6 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_DEF_HW_H_ +#define _NBL_DEF_HW_H_ + +#include "nbl_include.h" + +struct nbl_hw_ops { +}; + +struct nbl_hw_ops_tbl { + struct nbl_hw_ops *ops; + void *priv; +}; + +int nbl_hw_init_leonis(void *p, struct nbl_init_param *param); +void nbl_hw_remove_leonis(void *p); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index 6f655d95d654..e620feb382c1 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -12,11 +12,25 @@ /* ------ Basic definitions ------- */ #define NBL_DRIVER_NAME "nbl_core" +#define NBL_MAX_PF 8 +#define NBL_NEXT_ID(id, max) \ + ({ \ + typeof(id) _id = (id); \ + ((_id) == (max) ? 0 : (_id) + 1); \ + }) + enum nbl_product_type { NBL_LEONIS_TYPE, NBL_PRODUCT_MAX, }; +enum nbl_hw_status { + NBL_HW_NOMAL, + /* Most HW modules do not work normally, excluding pcie/emp */ + NBL_HW_FATAL_ERR, + NBL_HW_STATUS_MAX, +}; + struct nbl_func_caps { u32 has_ctrl:1; u32 has_net:1; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c index d9d79803bef5..a93aa98f2316 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c @@ -9,8 +9,8 @@ static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = { { - .hw_init = NULL, - .hw_remove = NULL, + .hw_init = nbl_hw_init_leonis, + .hw_remove = nbl_hw_remove_leonis, .res_init = NULL, .res_remove = NULL, .chan_init = NULL, @@ -33,6 +33,7 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, struct nbl_adapter *adapter; struct nbl_common_info *common; struct nbl_product_base_ops *product_base_ops; + int ret = 0; if (!pdev) return NULL; @@ -60,14 +61,28 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, nbl_core_setup_product_ops(adapter, param, 
&product_base_ops); + /* + * Every product's hw/chan/res layers differ greatly, + * so call each product's own init ops. + */ + ret = product_base_ops->hw_init(adapter, param); + if (ret) + goto hw_init_fail; + return adapter; +hw_init_fail: + devm_kfree(&pdev->dev, adapter); + return NULL; } void nbl_core_remove(struct nbl_adapter *adapter) { + struct nbl_product_base_ops *product_base_ops; struct device *dev; dev = NBL_ADAP_TO_DEV(adapter); + product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter); + product_base_ops->hw_remove(adapter); devm_kfree(dev, adapter); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 04/15] net/nebula-matrix: add machine-generated headers and chip definitions 2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang ` (2 preceding siblings ...) 2026-01-09 10:01 ` [PATCH v2 net-next 03/15] net/nebula-matrix: add HW layer definitions and implementation illusion.wang @ 2026-01-09 10:01 ` illusion.wang 2026-01-09 10:01 ` [PATCH v2 net-next 05/15] net/nebula-matrix: add channel layer definitions and implementation illusion.wang ` (11 subsequent siblings) 15 siblings, 0 replies; 19+ messages in thread From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw) To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn, edumazet, open list 1. nbl_hw_leonis/base/* are machine generated headers 2. nbl_hw.h/nbl_hw_leonis.h chip-related reg definitions 3. 
nbl_hw_leonis_regs.c P4 configuration that will be invoked during chip initialization Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 1 + .../nebula-matrix/nbl/nbl_hw/nbl_hw.h | 172 + .../nbl_hw/nbl_hw_leonis/base/nbl_datapath.h | 11 + .../nbl_hw_leonis/base/nbl_datapath_dped.h | 2152 +++++++++ .../nbl_hw_leonis/base/nbl_datapath_dstore.h | 929 ++++ .../nbl_hw_leonis/base/nbl_datapath_ucar.h | 414 ++ .../nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h | 10 + .../nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h | 665 +++ .../nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h | 1397 ++++++ .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h | 1701 ++++++++ .../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c | 3863 +++++++++++++++++ .../nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h | 12 + 12 files changed, 11327 insertions(+) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index d5cadc289366..f5c1f8030beb 100644 --- 
a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o nbl_core-objs += nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \ + nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \ nbl_main.o # Provide include files diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h new file mode 100644 index 000000000000..b88bc1db6162 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw.h @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_HW_H_ +#define _NBL_HW_H_ + +#include "nbl_include.h" + +#define NBL_MAX_ETHERNET (4) + +#define NBL_PT_PP0 0 +#define NBL_PT_LEN 3 +#define NBL_TCAM_TABLE_LEN (64) +#define NBL_MCC_ID_INVALID U16_MAX +#define NBL_KT_BYTE_LEN 40 +#define NBL_KT_BYTE_HALF_LEN 20 + +#define NBL_EM0_PT_HW_UP_TUNNEL_L2 0 +#define NBL_EM0_PT_HW_UP_L2 1 +#define NBL_EM0_PT_HW_DOWN_L2 2 +#define NBL_EM0_PT_HW_UP_LLDP_LACP 3 +#define NBL_EM0_PT_PMD_ND_UPCALL 4 +#define NBL_EM0_PT_HW_L2_UP_MULTI_MCAST 5 +#define NBL_EM0_PT_HW_L3_UP_MULTI_MCAST 6 +#define NBL_EM0_PT_HW_L2_DOWN_MULTI_MCAST 7 +#define NBL_EM0_PT_HW_L3_DOWN_MULTI_MCAST 8 +#define NBL_EM0_PT_HW_DPRBAC_IPV4 9 +#define NBL_EM0_PT_HW_DPRBAC_IPV6 10 +#define NBL_EM0_PT_HW_UL4S_IPV4 11 +#define NBL_EM0_PT_HW_UL4S_IPV6 12 + +#define NBL_PP0_PROFILE_ID_MIN (0) +#define NBL_PP0_PROFILE_ID_MAX (15) +#define NBL_PP1_PROFILE_ID_MIN (16) +#define NBL_PP1_PROFILE_ID_MAX (31) +#define NBL_PP2_PROFILE_ID_MIN (32) +#define NBL_PP2_PROFILE_ID_MAX (47) +#define NBL_PP_PROFILE_NUM (16) + +#define NBL_QID_MAP_TABLE_ENTRIES (4096) +#define NBL_EPRO_PF_RSS_RET_TBL_DEPTH (4096) +#define NBL_EPRO_RSS_RET_TBL_DEPTH (8192 * 2) +#define NBL_EPRO_RSS_ENTRY_SIZE_UNIT (16) + +#define NBL_EPRO_PF_RSS_RET_TBL_COUNT (512) +#define NBL_EPRO_PF_RSS_ENTRY_SIZE (5) + +#define 
NBL_EPRO_RSS_ENTRY_MAX_COUNT (512) +#define NBL_EPRO_RSS_ENTRY_MAX_SIZE (4) + +#define NBL_EPRO_RSS_SK_SIZE 40 +#define NBL_EPRO_RSS_PER_KEY_SIZE 8 +#define NBL_EPRO_RSS_KEY_NUM (NBL_EPRO_RSS_SK_SIZE / NBL_EPRO_RSS_PER_KEY_SIZE) + +enum { + NBL_HT0, + NBL_HT1, + NBL_HT_MAX, +}; + +enum { + NBL_KT_HALF_MODE, + NBL_KT_FULL_MODE, +}; + +#pragma pack(1) +union nbl_action_data { + union dport_act { + struct { + /* port_type = SET_DPORT_TYPE_ETH_LAG, set the eth and + * lag field. + */ + u16 dport_info:10; + u16 dport_type:2; + #define FWD_DPORT_TYPE_ETH (0) + #define FWD_DPORT_TYPE_LAG (1) + #define FWD_DPORT_TYPE_VSI (2) + u16 dport_id:4; + #define FWD_DPORT_ID_HOST_TLS (0) + #define FWD_DPORT_ID_ECPU_TLS (1) + #define FWD_DPORT_ID_HOST_RDMA (2) + #define FWD_DPORT_ID_ECPU_RDMA (3) + #define FWD_DPORT_ID_EMP (4) + #define FWD_DPORT_ID_BMC (5) + #define FWD_DPORT_ID_LOOP_BACK (7) + #define FWD_DPORT_ID_ETH0 (8) + #define FWD_DPORT_ID_ETH1 (9) + #define FWD_DPORT_ID_ETH2 (10) + #define FWD_DPORT_ID_ETH3 (11) + } fwd_dport; + + struct { + /* port_type = SET_DPORT_TYPE_ETH_LAG, + * set the eth and lag field. + */ + u16 eth_id:2; + u16 lag_id:2; + u16 eth_vld:1; + u16 lag_vld:1; + u16 rsv:4; + u16 port_type:2; + u16 next_stg_sel:2; + u16 upcall_flag:2; + } down; + + struct { + /* port_type = SET_DPORT_TYPE_VSI_HOST and + * SET_DPORT_TYPE_VSI_ECPU, + * set the port_id field as the vsi_id. + * port_type = SET_DPORT_TYPE_SP_PORT, set the port_id + * as the defined PORT_TYPE_SP_*. 
+ */ + u16 port_id:10; + #define PORT_TYPE_SP_DROP (0x3FF) + #define PORT_TYPE_SP_GLB_LB (0x3FE) + #define PORT_TYPE_SP_BMC (0x3FD) + #define PORT_TYPE_SP_EMP (0x3FC) + u16 port_type:2; + #define SET_DPORT_TYPE_VSI_HOST (0) + #define SET_DPORT_TYPE_VSI_ECPU (1) + #define SET_DPORT_TYPE_ETH_LAG (2) + #define SET_DPORT_TYPE_SP_PORT (3) + u16 next_stg_sel:2; + #define NEXT_STG_SEL_NONE (0) + #define NEXT_STG_SEL_ACL_S0 (1) + #define NEXT_STG_SEL_EPRO (2) + #define NEXT_STG_SEL_BYPASS (3) + u16 upcall_flag:2; + #define AUX_KEEP_FWD_TYPE (0) + #define AUX_FWD_TYPE_NML_FWD (1) + #define AUX_FWD_TYPE_UPCALL (2) + } up; + } dport; + + struct dqueue_act { + u16 que_id:11; + u16 rsv:5; + } dqueue; + + struct mcc_id_act { + u16 mcc_id:13; + u16 pri:1; + #define NBL_MCC_PRI_HIGH (0) + #define NBL_MCC_PRI_LOW (1) + uint32_t rsv:2; + } mcc_idx; + + struct set_aux_act { + u16 nstg_val:4; + u16 nstg_vld:1; + u16 ftype_val:3; + u16 ftype_vld:1; + u16 pkt_cos_val:3; + u16 pcos_vld:1; + u16 rsv:1; + #define NBL_SET_AUX_CLR_FLG (0) + #define NBL_SET_AUX_SET_FLG (1) + #define NBL_SET_AUX_SET_AUX (2) + u16 sub_id:2; + } set_aux; + + u16 data; +}; + +#pragma pack() + +#define NBL_SPORT_ETH_OFFSET 8 + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h new file mode 100644 index 000000000000..87a0f432cbd5 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ +// Code generated by interstellar. DO NOT EDIT. 
+// Compatible with leonis RTL tag 0710 + +#include "nbl_datapath_ucar.h" +#include "nbl_datapath_dped.h" +#include "nbl_datapath_dstore.h" diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h new file mode 100644 index 000000000000..2715ce4ae32a --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dped.h @@ -0,0 +1,2152 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + // Code generated by interstellar. DO NOT EDIT. +// Compatible with leonis RTL tag 0710 + +#ifndef NBL_DPED_H +#define NBL_DPED_H 1 + +#include <linux/types.h> + +#define NBL_DPED_BASE (0x0075C000) + +#define NBL_DPED_INT_STATUS_ADDR (0x75c000) +#define NBL_DPED_INT_STATUS_DEPTH (1) +#define NBL_DPED_INT_STATUS_WIDTH (32) +#define NBL_DPED_INT_STATUS_DWLEN (1) +union dped_int_status_u { + struct dped_int_status { + u32 pkt_length_err:1; /* [0] Default:0x0 RWC */ + u32 fifo_uflw_err:1; /* [1] Default:0x0 RWC */ + u32 fifo_dflw_err:1; /* [2] Default:0x0 RWC */ + u32 fsm_err:1; /* [3] Default:0x0 RWC */ + u32 cif_err:1; /* [4] Default:0x0 RWC */ + u32 input_err:1; /* [5] Default:0x0 RWC */ + u32 cfg_err:1; /* [6] Default:0x0 RWC */ + u32 data_ucor_err:1; /* [7] Default:0x0 RWC */ + u32 inmeta_ucor_err:1; /* [8] Default:0x0 RWC */ + u32 meta_ucor_err:1; /* [9] Default:0x0 RWC */ + u32 meta_cor_ecc_err:1; /* [10] Default:0x0 RWC */ + u32 fwd_atid_nomat_err:1; /* [11] Default:0x0 RWC */ + u32 meta_value_err:1; /* [12] Default:0x0 RWC */ + u32 edit_atnum_err:1; /* [13] Default:0x0 RWC */ + u32 header_oft_ovf:1; /* [14] Default:0x0 RWC */ + u32 edit_pos_err:1; /* [15] Default:0x0 RWC */ + u32 da_oft_len_ovf:1; /* [16] Default:0x0 RWC */ + u32 lxoffset_ovf:1; /* [17] Default:0x0 RWC */ + u32 add_head_ovf:1; /* [18] Default:0x0 RWC */ + u32 rsv:13; /* [31:19] Default:0x0 RO 
*/ + } __packed info; + u32 data[NBL_DPED_INT_STATUS_DWLEN]; +} __packed; + +#define NBL_DPED_INT_MASK_ADDR (0x75c004) +#define NBL_DPED_INT_MASK_DEPTH (1) +#define NBL_DPED_INT_MASK_WIDTH (32) +#define NBL_DPED_INT_MASK_DWLEN (1) +union dped_int_mask_u { + struct dped_int_mask { + u32 pkt_length_err:1; /* [0] Default:0x0 RW */ + u32 fifo_uflw_err:1; /* [1] Default:0x0 RW */ + u32 fifo_dflw_err:1; /* [2] Default:0x0 RW */ + u32 fsm_err:1; /* [3] Default:0x0 RW */ + u32 cif_err:1; /* [4] Default:0x0 RW */ + u32 input_err:1; /* [5] Default:0x0 RW */ + u32 cfg_err:1; /* [6] Default:0x0 RW */ + u32 data_ucor_err:1; /* [7] Default:0x0 RW */ + u32 inmeta_ucor_err:1; /* [8] Default:0x0 RW */ + u32 meta_ucor_err:1; /* [9] Default:0x0 RW */ + u32 meta_cor_ecc_err:1; /* [10] Default:0x0 RW */ + u32 fwd_atid_nomat_err:1; /* [11] Default:0x1 RW */ + u32 meta_value_err:1; /* [12] Default:0x0 RW */ + u32 edit_atnum_err:1; /* [13] Default:0x0 RW */ + u32 header_oft_ovf:1; /* [14] Default:0x0 RW */ + u32 edit_pos_err:1; /* [15] Default:0x0 RW */ + u32 da_oft_len_ovf:1; /* [16] Default:0x0 RW */ + u32 lxoffset_ovf:1; /* [17] Default:0x0 RW */ + u32 add_head_ovf:1; /* [18] Default:0x0 RW */ + u32 rsv:13; /* [31:19] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_INT_MASK_DWLEN]; +} __packed; + +#define NBL_DPED_INT_SET_ADDR (0x75c008) +#define NBL_DPED_INT_SET_DEPTH (1) +#define NBL_DPED_INT_SET_WIDTH (32) +#define NBL_DPED_INT_SET_DWLEN (1) +union dped_int_set_u { + struct dped_int_set { + u32 pkt_length_err:1; /* [0] Default:0x0 WO */ + u32 fifo_uflw_err:1; /* [1] Default:0x0 WO */ + u32 fifo_dflw_err:1; /* [2] Default:0x0 WO */ + u32 fsm_err:1; /* [3] Default:0x0 WO */ + u32 cif_err:1; /* [4] Default:0x0 WO */ + u32 input_err:1; /* [5] Default:0x0 WO */ + u32 cfg_err:1; /* [6] Default:0x0 WO */ + u32 data_ucor_err:1; /* [7] Default:0x0 WO */ + u32 inmeta_ucor_err:1; /* [8] Default:0x0 WO */ + u32 meta_ucor_err:1; /* [9] Default:0x0 WO */ + u32 meta_cor_ecc_err:1; /* [10] 
Default:0x0 WO */ + u32 fwd_atid_nomat_err:1; /* [11] Default:0x0 WO */ + u32 meta_value_err:1; /* [12] Default:0x0 WO */ + u32 edit_atnum_err:1; /* [13] Default:0x0 WO */ + u32 header_oft_ovf:1; /* [14] Default:0x0 WO */ + u32 edit_pos_err:1; /* [15] Default:0x0 WO */ + u32 da_oft_len_ovf:1; /* [16] Default:0x0 WO */ + u32 lxoffset_ovf:1; /* [17] Default:0x0 WO */ + u32 add_head_ovf:1; /* [18] Default:0x0 WO */ + u32 rsv:13; /* [31:19] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_INT_SET_DWLEN]; +} __packed; + +#define NBL_DPED_INIT_DONE_ADDR (0x75c00c) +#define NBL_DPED_INIT_DONE_DEPTH (1) +#define NBL_DPED_INIT_DONE_WIDTH (32) +#define NBL_DPED_INIT_DONE_DWLEN (1) +union dped_init_done_u { + struct dped_init_done { + u32 done:1; /* [00:00] Default:0x0 RO */ + u32 rsv:31; /* [31:01] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_INIT_DONE_DWLEN]; +} __packed; + +#define NBL_DPED_PKT_LENGTH_ERR_INFO_ADDR (0x75c020) +#define NBL_DPED_PKT_LENGTH_ERR_INFO_DEPTH (1) +#define NBL_DPED_PKT_LENGTH_ERR_INFO_WIDTH (32) +#define NBL_DPED_PKT_LENGTH_ERR_INFO_DWLEN (1) +union dped_pkt_length_err_info_u { + struct dped_pkt_length_err_info { + u32 ptr_eop:1; /* [0] Default:0x0 RC */ + u32 pkt_eop:1; /* [1] Default:0x0 RC */ + u32 pkt_mod:1; /* [2] Default:0x0 RC */ + u32 rsv:29; /* [31:3] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_PKT_LENGTH_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_CIF_ERR_INFO_ADDR (0x75c040) +#define NBL_DPED_CIF_ERR_INFO_DEPTH (1) +#define NBL_DPED_CIF_ERR_INFO_WIDTH (32) +#define NBL_DPED_CIF_ERR_INFO_DWLEN (1) +union dped_cif_err_info_u { + struct dped_cif_err_info { + u32 addr:30; /* [29:0] Default:0x0 RO */ + u32 wr_err:1; /* [30] Default:0x0 RO */ + u32 ucor_err:1; /* [31] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_CIF_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_INPUT_ERR_INFO_ADDR (0x75c048) +#define NBL_DPED_INPUT_ERR_INFO_DEPTH (1) +#define NBL_DPED_INPUT_ERR_INFO_WIDTH (32) +#define 
NBL_DPED_INPUT_ERR_INFO_DWLEN (1) +union dped_input_err_info_u { + struct dped_input_err_info { + u32 eoc_miss:1; /* [0] Default:0x0 RC */ + u32 soc_miss:1; /* [1] Default:0x0 RC */ + u32 rsv:30; /* [31:2] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_INPUT_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_CFG_ERR_INFO_ADDR (0x75c050) +#define NBL_DPED_CFG_ERR_INFO_DEPTH (1) +#define NBL_DPED_CFG_ERR_INFO_WIDTH (32) +#define NBL_DPED_CFG_ERR_INFO_DWLEN (1) +union dped_cfg_err_info_u { + struct dped_cfg_err_info { + u32 length:1; /* [0] Default:0x0 RC */ + u32 rd_conflict:1; /* [1] Default:0x0 RC */ + u32 rd_addr:8; /* [9:2] Default:0x0 RC */ + u32 rsv:22; /* [31:10] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_CFG_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_ADDR (0x75c06c) +#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_DEPTH (1) +#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_WIDTH (32) +#define NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_DWLEN (1) +union dped_fwd_atid_nomat_err_info_u { + struct dped_fwd_atid_nomat_err_info { + u32 dport:1; /* [0] Default:0x0 RC */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_FWD_ATID_NOMAT_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_META_VALUE_ERR_INFO_ADDR (0x75c070) +#define NBL_DPED_META_VALUE_ERR_INFO_DEPTH (1) +#define NBL_DPED_META_VALUE_ERR_INFO_WIDTH (32) +#define NBL_DPED_META_VALUE_ERR_INFO_DWLEN (1) +union dped_meta_value_err_info_u { + struct dped_meta_value_err_info { + u32 sport:1; /* [0] Default:0x0 RC */ + u32 dport:1; /* [1] Default:0x0 RC */ + u32 dscp_ecn:1; /* [2] Default:0x0 RC */ + u32 tnl:1; /* [3] Default:0x0 RC */ + u32 vni:1; /* [4] Default:0x0 RC */ + u32 vni_one:1; /* [5] Default:0x0 RC */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_META_VALUE_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_EDIT_ATNUM_ERR_INFO_ADDR (0x75c078) +#define NBL_DPED_EDIT_ATNUM_ERR_INFO_DEPTH (1) +#define 
NBL_DPED_EDIT_ATNUM_ERR_INFO_WIDTH (32) +#define NBL_DPED_EDIT_ATNUM_ERR_INFO_DWLEN (1) +union dped_edit_atnum_err_info_u { + struct dped_edit_atnum_err_info { + u32 replace:1; /* [0] Default:0x0 RC */ + u32 del_add:1; /* [1] Default:0x0 RC */ + u32 ttl:1; /* [2] Default:0x0 RC */ + u32 dscp:1; /* [3] Default:0x0 RC */ + u32 tnl:1; /* [4] Default:0x0 RC */ + u32 sport:1; /* [5] Default:0x0 RC */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_EDIT_ATNUM_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DPED_HEADER_OFT_OVF_ADDR (0x75c080) +#define NBL_DPED_HEADER_OFT_OVF_DEPTH (1) +#define NBL_DPED_HEADER_OFT_OVF_WIDTH (32) +#define NBL_DPED_HEADER_OFT_OVF_DWLEN (1) +union dped_header_oft_ovf_u { + struct dped_header_oft_ovf { + u32 replace:1; /* [0] Default:0x0 RC */ + u32 rsv2:7; /* [7:1] Default:0x0 RO */ + u32 add_del:6; /* [13:8] Default:0x0 RC */ + u32 dscp_ecn:1; /* [14] Default:0x0 RC */ + u32 rsv1:1; /* [15] Default:0x0 RO */ + u32 ttl:1; /* [16] Default:0x0 RC */ + u32 sctp:1; /* [17] Default:0x0 RC */ + u32 dscp:1; /* [18] Default:0x0 RC */ + u32 pri:1; /* [19] Default:0x0 RC */ + u32 len0:1; /* [20] Default:0x0 RC */ + u32 len1:1; /* [21] Default:0x0 RC */ + u32 ck0:1; /* [22] Default:0x0 RC */ + u32 ck1:1; /* [23] Default:0x0 RC */ + u32 ck_start0_0:1; /* [24] Default:0x0 RC */ + u32 ck_start0_1:1; /* [25] Default:0x0 RC */ + u32 ck_start1_0:1; /* [26] Default:0x0 RC */ + u32 ck_start1_1:1; /* [27] Default:0x0 RC */ + u32 head:1; /* [28] Default:0x0 RC */ + u32 ck_len0:1; /* [29] Default:0x0 RC */ + u32 ck_len1:1; /* [30] Default:0x0 RC */ + u32 rsv:1; /* [31] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HEADER_OFT_OVF_DWLEN]; +} __packed; + +#define NBL_DPED_EDIT_POS_ERR_ADDR (0x75c088) +#define NBL_DPED_EDIT_POS_ERR_DEPTH (1) +#define NBL_DPED_EDIT_POS_ERR_WIDTH (32) +#define NBL_DPED_EDIT_POS_ERR_DWLEN (1) +union dped_edit_pos_err_u { + struct dped_edit_pos_err { + u32 replace:1; /* [0] Default:0x0 RC */ + u32 
cross_level:6; /* [6:1] Default:0x0 RC */ + u32 rsv2:1; /* [7] Default:0x0 RO */ + u32 add_del:6; /* [13:8] Default:0x0 RC */ + u32 dscp_ecn:1; /* [14] Default:0x0 RC */ + u32 rsv1:1; /* [15] Default:0x0 RO */ + u32 ttl:1; /* [16] Default:0x0 RC */ + u32 sctp:1; /* [17] Default:0x0 RC */ + u32 dscp:1; /* [18] Default:0x0 RC */ + u32 pri:1; /* [19] Default:0x0 RC */ + u32 len0:1; /* [20] Default:0x0 RC */ + u32 len1:1; /* [21] Default:0x0 RC */ + u32 ck0:1; /* [22] Default:0x0 RC */ + u32 ck1:1; /* [23] Default:0x0 RC */ + u32 ck_start0_0:1; /* [24] Default:0x0 RC */ + u32 ck_start0_1:1; /* [25] Default:0x0 RC */ + u32 ck_start1_0:1; /* [26] Default:0x0 RC */ + u32 ck_start1_1:1; /* [27] Default:0x0 RC */ + u32 ck_len0:1; /* [28] Default:0x0 RC */ + u32 ck_len1:1; /* [29] Default:0x0 RC */ + u32 rsv:2; /* [31:30] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_EDIT_POS_ERR_DWLEN]; +} __packed; + +#define NBL_DPED_DA_OFT_LEN_OVF_ADDR (0x75c090) +#define NBL_DPED_DA_OFT_LEN_OVF_DEPTH (1) +#define NBL_DPED_DA_OFT_LEN_OVF_WIDTH (32) +#define NBL_DPED_DA_OFT_LEN_OVF_DWLEN (1) +union dped_da_oft_len_ovf_u { + struct dped_da_oft_len_ovf { + u32 at0:5; /* [4:0] Default:0x0 RC */ + u32 at1:5; /* [9:5] Default:0x0 RC */ + u32 at2:5; /* [14:10] Default:0x0 RC */ + u32 at3:5; /* [19:15] Default:0x0 RC */ + u32 at4:5; /* [24:20] Default:0x0 RC */ + u32 at5:5; /* [29:25] Default:0x0 RC */ + u32 rsv:2; /* [31:30] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_DA_OFT_LEN_OVF_DWLEN]; +} __packed; + +#define NBL_DPED_LXOFFSET_OVF_ADDR (0x75c098) +#define NBL_DPED_LXOFFSET_OVF_DEPTH (1) +#define NBL_DPED_LXOFFSET_OVF_WIDTH (32) +#define NBL_DPED_LXOFFSET_OVF_DWLEN (1) +union dped_lxoffset_ovf_u { + struct dped_lxoffset_ovf { + u32 l2:1; /* [0] Default:0x0 RC */ + u32 l3:1; /* [1] Default:0x0 RC */ + u32 l4:1; /* [2] Default:0x0 RC */ + u32 pld:1; /* [3] Default:0x0 RC */ + u32 rsv:28; /* [31:4] Default:0x0 RO */ + } __packed info; + u32 
data[NBL_DPED_LXOFFSET_OVF_DWLEN]; +} __packed; + +#define NBL_DPED_ADD_HEAD_OVF_ADDR (0x75c0a0) +#define NBL_DPED_ADD_HEAD_OVF_DEPTH (1) +#define NBL_DPED_ADD_HEAD_OVF_WIDTH (32) +#define NBL_DPED_ADD_HEAD_OVF_DWLEN (1) +union dped_add_head_ovf_u { + struct dped_add_head_ovf { + u32 tnl_l2:1; /* [0] Default:0x0 RC */ + u32 tnl_pkt:1; /* [1] Default:0x0 RC */ + u32 rsv1:14; /* [15:2] Default:0x0 RO */ + u32 mir_l2:1; /* [16] Default:0x0 RC */ + u32 mir_pkt:1; /* [17] Default:0x0 RC */ + u32 rsv:14; /* [31:18] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_ADD_HEAD_OVF_DWLEN]; +} __packed; + +#define NBL_DPED_CAR_CTRL_ADDR (0x75c100) +#define NBL_DPED_CAR_CTRL_DEPTH (1) +#define NBL_DPED_CAR_CTRL_WIDTH (32) +#define NBL_DPED_CAR_CTRL_DWLEN (1) +union dped_car_ctrl_u { + struct dped_car_ctrl { + u32 sctr_car:1; /* [00:00] Default:0x1 RW */ + u32 rctr_car:1; /* [01:01] Default:0x1 RW */ + u32 rc_car:1; /* [02:02] Default:0x1 RW */ + u32 tbl_rc_car:1; /* [03:03] Default:0x1 RW */ + u32 rsv:28; /* [31:04] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_CAR_CTRL_DWLEN]; +} __packed; + +#define NBL_DPED_INIT_START_ADDR (0x75c10c) +#define NBL_DPED_INIT_START_DEPTH (1) +#define NBL_DPED_INIT_START_WIDTH (32) +#define NBL_DPED_INIT_START_DWLEN (1) +union dped_init_start_u { + struct dped_init_start { + u32 start:1; /* [00:00] Default:0x0 WO */ + u32 rsv:31; /* [31:01] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_INIT_START_DWLEN]; +} __packed; + +#define NBL_DPED_TIMEOUT_CFG_ADDR (0x75c110) +#define NBL_DPED_TIMEOUT_CFG_DEPTH (1) +#define NBL_DPED_TIMEOUT_CFG_WIDTH (32) +#define NBL_DPED_TIMEOUT_CFG_DWLEN (1) +union dped_timeout_cfg_u { + struct dped_timeout_cfg { + u32 fsm_max_num:16; /* [15:00] Default:0xfff RW */ + u32 tab:8; /* [23:16] Default:0x40 RW */ + u32 rsv:8; /* [31:24] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TIMEOUT_CFG_DWLEN]; +} __packed; + +#define NBL_DPED_TNL_MAX_LENGTH_ADDR (0x75c154) +#define 
NBL_DPED_TNL_MAX_LENGTH_DEPTH (1) +#define NBL_DPED_TNL_MAX_LENGTH_WIDTH (32) +#define NBL_DPED_TNL_MAX_LENGTH_DWLEN (1) +union dped_tnl_max_length_u { + struct dped_tnl_max_length { + u32 th:7; /* [6:0] Default:0x5A RW */ + u32 rsv:25; /* [31:7] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TNL_MAX_LENGTH_DWLEN]; +} __packed; + +#define NBL_DPED_PKT_DROP_EN_ADDR (0x75c170) +#define NBL_DPED_PKT_DROP_EN_DEPTH (1) +#define NBL_DPED_PKT_DROP_EN_WIDTH (32) +#define NBL_DPED_PKT_DROP_EN_DWLEN (1) +union dped_pkt_drop_en_u { + struct dped_pkt_drop_en { + u32 en:1; /* [0] Default:0x1 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_PKT_DROP_EN_DWLEN]; +} __packed; + +#define NBL_DPED_PKT_HERR_DROP_EN_ADDR (0x75c174) +#define NBL_DPED_PKT_HERR_DROP_EN_DEPTH (1) +#define NBL_DPED_PKT_HERR_DROP_EN_WIDTH (32) +#define NBL_DPED_PKT_HERR_DROP_EN_DWLEN (1) +union dped_pkt_herr_drop_en_u { + struct dped_pkt_herr_drop_en { + u32 en:1; /* [0] Default:0x1 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_PKT_HERR_DROP_EN_DWLEN]; +} __packed; + +#define NBL_DPED_PKT_PARITY_DROP_EN_ADDR (0x75c178) +#define NBL_DPED_PKT_PARITY_DROP_EN_DEPTH (1) +#define NBL_DPED_PKT_PARITY_DROP_EN_WIDTH (32) +#define NBL_DPED_PKT_PARITY_DROP_EN_DWLEN (1) +union dped_pkt_parity_drop_en_u { + struct dped_pkt_parity_drop_en { + u32 en0:1; /* [0] Default:0x1 RW */ + u32 en1:1; /* [1] Default:0x1 RW */ + u32 rsv:30; /* [31:2] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_PKT_PARITY_DROP_EN_DWLEN]; +} __packed; + +#define NBL_DPED_TTL_DROP_EN_ADDR (0x75c17c) +#define NBL_DPED_TTL_DROP_EN_DEPTH (1) +#define NBL_DPED_TTL_DROP_EN_WIDTH (32) +#define NBL_DPED_TTL_DROP_EN_DWLEN (1) +union dped_ttl_drop_en_u { + struct dped_ttl_drop_en { + u32 en:1; /* [0] Default:0x1 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TTL_DROP_EN_DWLEN]; +} __packed; + +#define 
NBL_DPED_TTL_ERROR_CODE_ADDR (0x75c188) +#define NBL_DPED_TTL_ERROR_CODE_DEPTH (1) +#define NBL_DPED_TTL_ERROR_CODE_WIDTH (32) +#define NBL_DPED_TTL_ERROR_CODE_DWLEN (1) +union dped_ttl_error_code_u { + struct dped_ttl_error_code { + u32 en:1; /* [0] Default:0x1 RW */ + u32 rsv1:7; /* [7:1] Default:0x0 RO */ + u32 id:4; /* [11:8] Default:0x6 RW */ + u32 rsv:20; /* [31:12] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TTL_ERROR_CODE_DWLEN]; +} __packed; + +#define NBL_DPED_HIGH_PRI_PKT_EN_ADDR (0x75c190) +#define NBL_DPED_HIGH_PRI_PKT_EN_DEPTH (1) +#define NBL_DPED_HIGH_PRI_PKT_EN_WIDTH (32) +#define NBL_DPED_HIGH_PRI_PKT_EN_DWLEN (1) +union dped_high_pri_pkt_en_u { + struct dped_high_pri_pkt_en { + u32 en:1; /* [0] Default:0x1 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HIGH_PRI_PKT_EN_DWLEN]; +} __packed; + +#define NBL_DPED_PADDING_CFG_ADDR (0x75c194) +#define NBL_DPED_PADDING_CFG_DEPTH (1) +#define NBL_DPED_PADDING_CFG_WIDTH (32) +#define NBL_DPED_PADDING_CFG_DWLEN (1) +union dped_padding_cfg_u { + struct dped_padding_cfg { + u32 th:6; /* [5:0] Default:0x3B RW */ + u32 rsv1:2; /* [7:6] Default:0x0 RO */ + u32 mode:2; /* [9:8] Default:0x0 RW */ + u32 rsv:22; /* [31:10] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_PADDING_CFG_DWLEN]; +} __packed; + +#define NBL_DPED_HW_EDIT_FLAG_SEL0_ADDR (0x75c204) +#define NBL_DPED_HW_EDIT_FLAG_SEL0_DEPTH (1) +#define NBL_DPED_HW_EDIT_FLAG_SEL0_WIDTH (32) +#define NBL_DPED_HW_EDIT_FLAG_SEL0_DWLEN (1) +union dped_hw_edit_flag_sel0_u { + struct dped_hw_edit_flag_sel0 { + u32 oft:5; /* [4:0] Default:0x1 RW */ + u32 rsv:27; /* [31:5] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HW_EDIT_FLAG_SEL0_DWLEN]; +} __packed; + +#define NBL_DPED_HW_EDIT_FLAG_SEL1_ADDR (0x75c208) +#define NBL_DPED_HW_EDIT_FLAG_SEL1_DEPTH (1) +#define NBL_DPED_HW_EDIT_FLAG_SEL1_WIDTH (32) +#define NBL_DPED_HW_EDIT_FLAG_SEL1_DWLEN (1) +union dped_hw_edit_flag_sel1_u { + struct 
dped_hw_edit_flag_sel1 { + u32 oft:5; /* [4:0] Default:0x2 RW */ + u32 rsv:27; /* [31:5] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HW_EDIT_FLAG_SEL1_DWLEN]; +} __packed; + +#define NBL_DPED_HW_EDIT_FLAG_SEL2_ADDR (0x75c20c) +#define NBL_DPED_HW_EDIT_FLAG_SEL2_DEPTH (1) +#define NBL_DPED_HW_EDIT_FLAG_SEL2_WIDTH (32) +#define NBL_DPED_HW_EDIT_FLAG_SEL2_DWLEN (1) +union dped_hw_edit_flag_sel2_u { + struct dped_hw_edit_flag_sel2 { + u32 oft:5; /* [4:0] Default:0x3 RW */ + u32 rsv:27; /* [31:5] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HW_EDIT_FLAG_SEL2_DWLEN]; +} __packed; + +#define NBL_DPED_HW_EDIT_FLAG_SEL3_ADDR (0x75c210) +#define NBL_DPED_HW_EDIT_FLAG_SEL3_DEPTH (1) +#define NBL_DPED_HW_EDIT_FLAG_SEL3_WIDTH (32) +#define NBL_DPED_HW_EDIT_FLAG_SEL3_DWLEN (1) +union dped_hw_edit_flag_sel3_u { + struct dped_hw_edit_flag_sel3 { + u32 oft:5; /* [4:0] Default:0x4 RW */ + u32 rsv:27; /* [31:5] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HW_EDIT_FLAG_SEL3_DWLEN]; +} __packed; + +#define NBL_DPED_HW_EDIT_FLAG_SEL4_ADDR (0x75c214) +#define NBL_DPED_HW_EDIT_FLAG_SEL4_DEPTH (1) +#define NBL_DPED_HW_EDIT_FLAG_SEL4_WIDTH (32) +#define NBL_DPED_HW_EDIT_FLAG_SEL4_DWLEN (1) +union dped_hw_edit_flag_sel4_u { + struct dped_hw_edit_flag_sel4 { + u32 oft:5; /* [4:0] Default:0xe RW */ + u32 rsv:27; /* [31:5] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_HW_EDIT_FLAG_SEL4_DWLEN]; +} __packed; + +#define NBL_DPED_RDMA_FLAG_ADDR (0x75c22c) +#define NBL_DPED_RDMA_FLAG_DEPTH (1) +#define NBL_DPED_RDMA_FLAG_WIDTH (32) +#define NBL_DPED_RDMA_FLAG_DWLEN (1) +union dped_rdma_flag_u { + struct dped_rdma_flag { + u32 oft:5; /* [4:0] Default:0xa RW */ + u32 rsv:27; /* [31:5] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_RDMA_FLAG_DWLEN]; +} __packed; + +#define NBL_DPED_FWD_DPORT_ADDR (0x75c230) +#define NBL_DPED_FWD_DPORT_DEPTH (1) +#define NBL_DPED_FWD_DPORT_WIDTH (32) +#define NBL_DPED_FWD_DPORT_DWLEN (1) +union dped_fwd_dport_u { 
+ struct dped_fwd_dport { + u32 id:6; /* [5:0] Default:0x9 RW */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_FWD_DPORT_DWLEN]; +} __packed; + +#define NBL_DPED_FWD_MIRID_ADDR (0x75c238) +#define NBL_DPED_FWD_MIRID_DEPTH (1) +#define NBL_DPED_FWD_MIRID_WIDTH (32) +#define NBL_DPED_FWD_MIRID_DWLEN (1) +union dped_fwd_mirid_u { + struct dped_fwd_mirid { + u32 id:6; /* [5:0] Default:0x8 RW */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_FWD_MIRID_DWLEN]; +} __packed; + +#define NBL_DPED_FWD_VNI0_ADDR (0x75c244) +#define NBL_DPED_FWD_VNI0_DEPTH (1) +#define NBL_DPED_FWD_VNI0_WIDTH (32) +#define NBL_DPED_FWD_VNI0_DWLEN (1) +union dped_fwd_vni0_u { + struct dped_fwd_vni0 { + u32 id:6; /* [5:0] Default:0xe RW */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_FWD_VNI0_DWLEN]; +} __packed; + +#define NBL_DPED_FWD_VNI1_ADDR (0x75c248) +#define NBL_DPED_FWD_VNI1_DEPTH (1) +#define NBL_DPED_FWD_VNI1_WIDTH (32) +#define NBL_DPED_FWD_VNI1_DWLEN (1) +union dped_fwd_vni1_u { + struct dped_fwd_vni1 { + u32 id:6; /* [5:0] Default:0xf RW */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_FWD_VNI1_DWLEN]; +} __packed; + +#define NBL_DPED_FWD_PRI_MDF_ADDR (0x75c250) +#define NBL_DPED_FWD_PRI_MDF_DEPTH (1) +#define NBL_DPED_FWD_PRI_MDF_WIDTH (32) +#define NBL_DPED_FWD_PRI_MDF_DWLEN (1) +union dped_fwd_pri_mdf_u { + struct dped_fwd_pri_mdf { + u32 id:6; /* [5:0] Default:0x15 RW */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_FWD_PRI_MDF_DWLEN]; +} __packed; + +#define NBL_DPED_VLAN_TYPE0_ADDR (0x75c260) +#define NBL_DPED_VLAN_TYPE0_DEPTH (1) +#define NBL_DPED_VLAN_TYPE0_WIDTH (32) +#define NBL_DPED_VLAN_TYPE0_DWLEN (1) +union dped_vlan_type0_u { + struct dped_vlan_type0 { + u32 vau:16; /* [15:0] Default:0x8100 RW */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_VLAN_TYPE0_DWLEN]; 
+} __packed; + +#define NBL_DPED_VLAN_TYPE1_ADDR (0x75c264) +#define NBL_DPED_VLAN_TYPE1_DEPTH (1) +#define NBL_DPED_VLAN_TYPE1_WIDTH (32) +#define NBL_DPED_VLAN_TYPE1_DWLEN (1) +union dped_vlan_type1_u { + struct dped_vlan_type1 { + u32 vau:16; /* [15:0] Default:0x88A8 RW */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_VLAN_TYPE1_DWLEN]; +} __packed; + +#define NBL_DPED_VLAN_TYPE2_ADDR (0x75c268) +#define NBL_DPED_VLAN_TYPE2_DEPTH (1) +#define NBL_DPED_VLAN_TYPE2_WIDTH (32) +#define NBL_DPED_VLAN_TYPE2_DWLEN (1) +union dped_vlan_type2_u { + struct dped_vlan_type2 { + u32 vau:16; /* [15:0] Default:0x9100 RW */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_VLAN_TYPE2_DWLEN]; +} __packed; + +#define NBL_DPED_VLAN_TYPE3_ADDR (0x75c26c) +#define NBL_DPED_VLAN_TYPE3_DEPTH (1) +#define NBL_DPED_VLAN_TYPE3_WIDTH (32) +#define NBL_DPED_VLAN_TYPE3_DWLEN (1) +union dped_vlan_type3_u { + struct dped_vlan_type3 { + u32 vau:16; /* [15:0] Default:0x0 RW */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_VLAN_TYPE3_DWLEN]; +} __packed; + +#define NBL_DPED_L3_LEN_MDY_CMD_0_ADDR (0x75c300) +#define NBL_DPED_L3_LEN_MDY_CMD_0_DEPTH (1) +#define NBL_DPED_L3_LEN_MDY_CMD_0_WIDTH (32) +#define NBL_DPED_L3_LEN_MDY_CMD_0_DWLEN (1) +union dped_l3_len_mdy_cmd_0_u { + struct dped_l3_len_mdy_cmd_0 { + u32 value:8; /* [7:0] Default:0x0 RW */ + u32 in_oft:7; /* [14:8] Default:0x2 RW */ + u32 rsv3:1; /* [15] Default:0x0 RO */ + u32 phid:2; /* [17:16] Default:0x2 RW */ + u32 rsv2:2; /* [19:18] Default:0x0 RO */ + u32 mode:2; /* [21:20] Default:0x2 RW */ + u32 rsv1:2; /* [23:22] Default:0x0 RO */ + u32 unit:1; /* [24] Default:0x0 RW */ + u32 rsv:6; /* [30:25] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L3_LEN_MDY_CMD_0_DWLEN]; +} __packed; + +#define NBL_DPED_L3_LEN_MDY_CMD_1_ADDR (0x75c304) +#define NBL_DPED_L3_LEN_MDY_CMD_1_DEPTH (1) 
+#define NBL_DPED_L3_LEN_MDY_CMD_1_WIDTH (32) +#define NBL_DPED_L3_LEN_MDY_CMD_1_DWLEN (1) +union dped_l3_len_mdy_cmd_1_u { + struct dped_l3_len_mdy_cmd_1 { + u32 value:8; /* [7:0] Default:0x28 RW */ + u32 in_oft:7; /* [14:8] Default:0x4 RW */ + u32 rsv3:1; /* [15] Default:0x0 RO */ + u32 phid:2; /* [17:16] Default:0x2 RW */ + u32 rsv2:2; /* [19:18] Default:0x0 RO */ + u32 mode:2; /* [21:20] Default:0x1 RW */ + u32 rsv1:2; /* [23:22] Default:0x0 RO */ + u32 unit:1; /* [24] Default:0x0 RW */ + u32 rsv:6; /* [30:25] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L3_LEN_MDY_CMD_1_DWLEN]; +} __packed; + +#define NBL_DPED_L4_LEN_MDY_CMD_0_ADDR (0x75c308) +#define NBL_DPED_L4_LEN_MDY_CMD_0_DEPTH (1) +#define NBL_DPED_L4_LEN_MDY_CMD_0_WIDTH (32) +#define NBL_DPED_L4_LEN_MDY_CMD_0_DWLEN (1) +union dped_l4_len_mdy_cmd_0_u { + struct dped_l4_len_mdy_cmd_0 { + u32 value:8; /* [7:0] Default:0x0 RW */ + u32 in_oft:7; /* [14:8] Default:0xc RW */ + u32 rsv3:1; /* [15] Default:0x0 RO */ + u32 phid:2; /* [17:16] Default:0x3 RW */ + u32 rsv2:2; /* [19:18] Default:0x0 RO */ + u32 mode:2; /* [21:20] Default:0x0 RW */ + u32 rsv1:2; /* [23:22] Default:0x0 RO */ + u32 unit:1; /* [24] Default:0x1 RW */ + u32 rsv:6; /* [30:25] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DPED_L4_LEN_MDY_CMD_0_DWLEN]; +} __packed; + +#define NBL_DPED_L4_LEN_MDY_CMD_1_ADDR (0x75c30c) +#define NBL_DPED_L4_LEN_MDY_CMD_1_DEPTH (1) +#define NBL_DPED_L4_LEN_MDY_CMD_1_WIDTH (32) +#define NBL_DPED_L4_LEN_MDY_CMD_1_DWLEN (1) +union dped_l4_len_mdy_cmd_1_u { + struct dped_l4_len_mdy_cmd_1 { + u32 value:8; /* [7:0] Default:0x0 RW */ + u32 in_oft:7; /* [14:8] Default:0x4 RW */ + u32 rsv3:1; /* [15] Default:0x0 RO */ + u32 phid:2; /* [17:16] Default:0x3 RW */ + u32 rsv2:2; /* [19:18] Default:0x0 RO */ + u32 mode:2; /* [21:20] Default:0x0 RW */ + u32 rsv1:2; /* [23:22] Default:0x0 RO */ + u32 unit:1; /* [24] Default:0x1 RW */ + 
u32 rsv:6; /* [30:25] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_LEN_MDY_CMD_1_DWLEN]; +} __packed; + +#define NBL_DPED_L3_CK_CMD_00_ADDR (0x75c310) +#define NBL_DPED_L3_CK_CMD_00_DEPTH (1) +#define NBL_DPED_L3_CK_CMD_00_WIDTH (32) +#define NBL_DPED_L3_CK_CMD_00_DWLEN (1) +union dped_l3_ck_cmd_00_u { + struct dped_l3_ck_cmd_00 { + u32 value:8; /* [7:0] Default:0x0 RW */ + u32 len_in_oft:7; /* [14:8] Default:0x0 RW */ + u32 len_phid:2; /* [16:15] Default:0x0 RW */ + u32 len_vld:1; /* [17] Default:0x0 RW */ + u32 data_vld:1; /* [18] Default:0x0 RW */ + u32 in_oft:7; /* [25:19] Default:0xa RW */ + u32 phid:2; /* [27:26] Default:0x2 RW */ + u32 flag:1; /* [28] Default:0x0 RW */ + u32 mode:1; /* [29] Default:0x0 RW */ + u32 rsv:1; /* [30] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L3_CK_CMD_00_DWLEN]; +} __packed; + +#define NBL_DPED_L3_CK_CMD_01_ADDR (0x75c314) +#define NBL_DPED_L3_CK_CMD_01_DEPTH (1) +#define NBL_DPED_L3_CK_CMD_01_WIDTH (32) +#define NBL_DPED_L3_CK_CMD_01_DWLEN (1) +union dped_l3_ck_cmd_01_u { + struct dped_l3_ck_cmd_01 { + u32 ck_start0:6; /* [5:0] Default:0x0 RW */ + u32 ck_phid0:2; /* [7:6] Default:0x2 RW */ + u32 ck_len0:7; /* [14:8] Default:0x0 RW */ + u32 ck_vld0:1; /* [15] Default:0x1 RW */ + u32 ck_start1:6; /* [21:16] Default:0x0 RW */ + u32 ck_phid1:2; /* [23:22] Default:0x0 RW */ + u32 ck_len1:7; /* [30:24] Default:0x0 RW */ + u32 ck_vld1:1; /* [31] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DPED_L3_CK_CMD_01_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_00_ADDR (0x75c318) +#define NBL_DPED_L4_CK_CMD_00_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_00_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_00_DWLEN (1) +union dped_l4_ck_cmd_00_u { + struct dped_l4_ck_cmd_00 { + u32 value:8; /* [7:0] Default:0x6 RW */ + u32 len_in_oft:7; /* [14:8] Default:0x2 RW */ + u32 len_phid:2; /* [16:15] Default:0x2 RW */ + u32 len_vld:1; /* [17] Default:0x1 
RW */ + u32 data_vld:1; /* [18] Default:0x1 RW */ + u32 in_oft:7; /* [25:19] Default:0x10 RW */ + u32 phid:2; /* [27:26] Default:0x3 RW */ + u32 flag:1; /* [28] Default:0x0 RW */ + u32 mode:1; /* [29] Default:0x0 RW */ + u32 rsv:1; /* [30] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_00_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_01_ADDR (0x75c31c) +#define NBL_DPED_L4_CK_CMD_01_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_01_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_01_DWLEN (1) +union dped_l4_ck_cmd_01_u { + struct dped_l4_ck_cmd_01 { + u32 ck_start0:6; /* [5:0] Default:0xc RW */ + u32 ck_phid0:2; /* [7:6] Default:0x2 RW */ + u32 ck_len0:7; /* [14:8] Default:0x8 RW */ + u32 ck_vld0:1; /* [15] Default:0x1 RW */ + u32 ck_start1:6; /* [21:16] Default:0x0 RW */ + u32 ck_phid1:2; /* [23:22] Default:0x3 RW */ + u32 ck_len1:7; /* [30:24] Default:0x0 RW */ + u32 ck_vld1:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_01_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_10_ADDR (0x75c320) +#define NBL_DPED_L4_CK_CMD_10_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_10_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_10_DWLEN (1) +union dped_l4_ck_cmd_10_u { + struct dped_l4_ck_cmd_10 { + u32 value:8; /* [7:0] Default:0x11 RW */ + u32 len_in_oft:7; /* [14:8] Default:0x2 RW */ + u32 len_phid:2; /* [16:15] Default:0x2 RW */ + u32 len_vld:1; /* [17] Default:0x1 RW */ + u32 data_vld:1; /* [18] Default:0x1 RW */ + u32 in_oft:7; /* [25:19] Default:0x6 RW */ + u32 phid:2; /* [27:26] Default:0x3 RW */ + u32 flag:1; /* [28] Default:0x1 RW */ + u32 mode:1; /* [29] Default:0x0 RW */ + u32 rsv:1; /* [30] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_10_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_11_ADDR (0x75c324) +#define NBL_DPED_L4_CK_CMD_11_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_11_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_11_DWLEN (1) +union 
dped_l4_ck_cmd_11_u { + struct dped_l4_ck_cmd_11 { + u32 ck_start0:6; /* [5:0] Default:0xc RW */ + u32 ck_phid0:2; /* [7:6] Default:0x2 RW */ + u32 ck_len0:7; /* [14:8] Default:0x8 RW */ + u32 ck_vld0:1; /* [15] Default:0x1 RW */ + u32 ck_start1:6; /* [21:16] Default:0x0 RW */ + u32 ck_phid1:2; /* [23:22] Default:0x3 RW */ + u32 ck_len1:7; /* [30:24] Default:0x0 RW */ + u32 ck_vld1:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_11_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_20_ADDR (0x75c328) +#define NBL_DPED_L4_CK_CMD_20_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_20_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_20_DWLEN (1) +union dped_l4_ck_cmd_20_u { + struct dped_l4_ck_cmd_20 { + u32 value:8; /* [7:0] Default:0x2e RW */ + u32 len_in_oft:7; /* [14:8] Default:0x4 RW */ + u32 len_phid:2; /* [16:15] Default:0x2 RW */ + u32 len_vld:1; /* [17] Default:0x1 RW */ + u32 data_vld:1; /* [18] Default:0x1 RW */ + u32 in_oft:7; /* [25:19] Default:0x10 RW */ + u32 phid:2; /* [27:26] Default:0x3 RW */ + u32 flag:1; /* [28] Default:0x0 RW */ + u32 mode:1; /* [29] Default:0x0 RW */ + u32 rsv:1; /* [30] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_20_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_21_ADDR (0x75c32c) +#define NBL_DPED_L4_CK_CMD_21_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_21_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_21_DWLEN (1) +union dped_l4_ck_cmd_21_u { + struct dped_l4_ck_cmd_21 { + u32 ck_start0:6; /* [5:0] Default:0x8 RW */ + u32 ck_phid0:2; /* [7:6] Default:0x2 RW */ + u32 ck_len0:7; /* [14:8] Default:0x20 RW */ + u32 ck_vld0:1; /* [15] Default:0x1 RW */ + u32 ck_start1:6; /* [21:16] Default:0x0 RW */ + u32 ck_phid1:2; /* [23:22] Default:0x3 RW */ + u32 ck_len1:7; /* [30:24] Default:0x0 RW */ + u32 ck_vld1:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_21_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_30_ADDR (0x75c330) +#define 
NBL_DPED_L4_CK_CMD_30_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_30_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_30_DWLEN (1) +union dped_l4_ck_cmd_30_u { + struct dped_l4_ck_cmd_30 { + u32 value:8; /* [7:0] Default:0x39 RW */ + u32 len_in_oft:7; /* [14:8] Default:0x4 RW */ + u32 len_phid:2; /* [16:15] Default:0x2 RW */ + u32 len_vld:1; /* [17] Default:0x1 RW */ + u32 data_vld:1; /* [18] Default:0x1 RW */ + u32 in_oft:7; /* [25:19] Default:0x6 RW */ + u32 phid:2; /* [27:26] Default:0x3 RW */ + u32 flag:1; /* [28] Default:0x1 RW */ + u32 mode:1; /* [29] Default:0x0 RW */ + u32 rsv:1; /* [30] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_30_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_31_ADDR (0x75c334) +#define NBL_DPED_L4_CK_CMD_31_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_31_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_31_DWLEN (1) +union dped_l4_ck_cmd_31_u { + struct dped_l4_ck_cmd_31 { + u32 ck_start0:6; /* [5:0] Default:0x8 RW */ + u32 ck_phid0:2; /* [7:6] Default:0x2 RW */ + u32 ck_len0:7; /* [14:8] Default:0x20 RW */ + u32 ck_vld0:1; /* [15] Default:0x1 RW */ + u32 ck_start1:6; /* [21:16] Default:0x0 RW */ + u32 ck_phid1:2; /* [23:22] Default:0x3 RW */ + u32 ck_len1:7; /* [30:24] Default:0x0 RW */ + u32 ck_vld1:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DPED_L4_CK_CMD_31_DWLEN]; +} __packed; + +#define NBL_DPED_L4_CK_CMD_40_ADDR (0x75c338) +#define NBL_DPED_L4_CK_CMD_40_DEPTH (1) +#define NBL_DPED_L4_CK_CMD_40_WIDTH (32) +#define NBL_DPED_L4_CK_CMD_40_DWLEN (1) +union dped_l4_ck_cmd_40_u { + struct dped_l4_ck_cmd_40 { + u32 value:8; /* [7:0] Default:0x0 RW */ + u32 len_in_oft:7; /* [14:8] Default:0x0 RW */ + u32 len_phid:2; /* [16:15] Default:0x0 RW */ + u32 len_vld:1; /* [17] Default:0x0 RW */ + u32 data_vld:1; /* [18] Default:0x0 RW */ + u32 in_oft:7; /* [25:19] Default:0x8 RW */ + u32 phid:2; /* [27:26] Default:0x3 RW */ + u32 flag:1; /* [28] Default:0x0 RW */ + u32 mode:1; /* [29] 
Default:0x1 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_L4_CK_CMD_40_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_41_ADDR (0x75c33c)
+#define NBL_DPED_L4_CK_CMD_41_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_41_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_41_DWLEN (1)
+union dped_l4_ck_cmd_41_u {
+	struct dped_l4_ck_cmd_41 {
+		u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x0 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+		u32 ck_vld0:1; /* [15] Default:0x0 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+		u32 ck_vld1:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_L4_CK_CMD_41_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_50_ADDR (0x75c340)
+#define NBL_DPED_L4_CK_CMD_50_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_50_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_50_DWLEN (1)
+union dped_l4_ck_cmd_50_u {
+	struct dped_l4_ck_cmd_50 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x2 RW */
+		u32 len_phid:2; /* [16:15] Default:0x2 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x1 RW */
+		u32 in_oft:7; /* [25:19] Default:0x2 RW */
+		u32 phid:2; /* [27:26] Default:0x3 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_L4_CK_CMD_50_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_51_ADDR (0x75c344)
+#define NBL_DPED_L4_CK_CMD_51_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_51_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_51_DWLEN (1)
+union dped_l4_ck_cmd_51_u {
+	struct dped_l4_ck_cmd_51 {
+		u32 ck_start0:6; /* [5:0] Default:0xc RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x8 RW */
+		u32 ck_vld0:1; /* [15] Default:0x0 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_L4_CK_CMD_51_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_60_ADDR (0x75c348)
+#define NBL_DPED_L4_CK_CMD_60_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_60_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_60_DWLEN (1)
+union dped_l4_ck_cmd_60_u {
+	struct dped_l4_ck_cmd_60 {
+		u32 value:8; /* [7:0] Default:0x62 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x4 RW */
+		u32 len_phid:2; /* [16:15] Default:0x2 RW */
+		u32 len_vld:1; /* [17] Default:0x1 RW */
+		u32 data_vld:1; /* [18] Default:0x1 RW */
+		u32 in_oft:7; /* [25:19] Default:0x2 RW */
+		u32 phid:2; /* [27:26] Default:0x3 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_L4_CK_CMD_60_DWLEN];
+} __packed;
+
+#define NBL_DPED_L4_CK_CMD_61_ADDR (0x75c34c)
+#define NBL_DPED_L4_CK_CMD_61_DEPTH (1)
+#define NBL_DPED_L4_CK_CMD_61_WIDTH (32)
+#define NBL_DPED_L4_CK_CMD_61_DWLEN (1)
+union dped_l4_ck_cmd_61_u {
+	struct dped_l4_ck_cmd_61 {
+		u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x0 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+		u32 ck_vld0:1; /* [15] Default:0x0 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+		u32 ck_vld1:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_L4_CK_CMD_61_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L3_CK_CMD_00_ADDR (0x75c350)
+#define NBL_DPED_TNL_L3_CK_CMD_00_DEPTH (1)
+#define NBL_DPED_TNL_L3_CK_CMD_00_WIDTH (32)
+#define NBL_DPED_TNL_L3_CK_CMD_00_DWLEN (1)
+union dped_tnl_l3_ck_cmd_00_u {
+	struct dped_tnl_l3_ck_cmd_00 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+		u32 len_phid:2; /* [16:15] Default:0x0 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x0 RW */
+		u32 in_oft:7; /* [25:19] Default:0xa RW */
+		u32 phid:2; /* [27:26] Default:0x2 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L3_CK_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L3_CK_CMD_01_ADDR (0x75c354)
+#define NBL_DPED_TNL_L3_CK_CMD_01_DEPTH (1)
+#define NBL_DPED_TNL_L3_CK_CMD_01_WIDTH (32)
+#define NBL_DPED_TNL_L3_CK_CMD_01_DWLEN (1)
+union dped_tnl_l3_ck_cmd_01_u {
+	struct dped_tnl_l3_ck_cmd_01 {
+		u32 ck_start0:6; /* [5:0] Default:0x0 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x0 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x0 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+		u32 ck_vld1:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L3_CK_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_00_ADDR (0x75c360)
+#define NBL_DPED_TNL_L4_CK_CMD_00_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_00_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_00_DWLEN (1)
+union dped_tnl_l4_ck_cmd_00_u {
+	struct dped_tnl_l4_ck_cmd_00 {
+		u32 value:8; /* [7:0] Default:0x11 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x2 RW */
+		u32 len_phid:2; /* [16:15] Default:0x2 RW */
+		u32 len_vld:1; /* [17] Default:0x1 RW */
+		u32 data_vld:1; /* [18] Default:0x1 RW */
+		u32 in_oft:7; /* [25:19] Default:0x6 RW */
+		u32 phid:2; /* [27:26] Default:0x3 RW */
+		u32 flag:1; /* [28] Default:0x1 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_01_ADDR (0x75c364)
+#define NBL_DPED_TNL_L4_CK_CMD_01_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_01_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_01_DWLEN (1)
+union dped_tnl_l4_ck_cmd_01_u {
+	struct dped_tnl_l4_ck_cmd_01 {
+		u32 ck_start0:6; /* [5:0] Default:0xc RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x8 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_10_ADDR (0x75c368)
+#define NBL_DPED_TNL_L4_CK_CMD_10_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_10_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_10_DWLEN (1)
+union dped_tnl_l4_ck_cmd_10_u {
+	struct dped_tnl_l4_ck_cmd_10 {
+		u32 value:8; /* [7:0] Default:0x39 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x4 RW */
+		u32 len_phid:2; /* [16:15] Default:0x2 RW */
+		u32 len_vld:1; /* [17] Default:0x1 RW */
+		u32 data_vld:1; /* [18] Default:0x1 RW */
+		u32 in_oft:7; /* [25:19] Default:0x6 RW */
+		u32 phid:2; /* [27:26] Default:0x3 RW */
+		u32 flag:1; /* [28] Default:0x1 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_11_ADDR (0x75c36c)
+#define NBL_DPED_TNL_L4_CK_CMD_11_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_11_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_11_DWLEN (1)
+union dped_tnl_l4_ck_cmd_11_u {
+	struct dped_tnl_l4_ck_cmd_11 {
+		u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x0 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_20_ADDR (0x75c370)
+#define NBL_DPED_TNL_L4_CK_CMD_20_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_20_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_20_DWLEN (1)
+union dped_tnl_l4_ck_cmd_20_u {
+	struct dped_tnl_l4_ck_cmd_20 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+		u32 len_phid:2; /* [16:15] Default:0x0 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x0 RW */
+		u32 in_oft:7; /* [25:19] Default:0x0 RW */
+		u32 phid:2; /* [27:26] Default:0x0 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_20_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_21_ADDR (0x75c374)
+#define NBL_DPED_TNL_L4_CK_CMD_21_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_21_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_21_DWLEN (1)
+union dped_tnl_l4_ck_cmd_21_u {
+	struct dped_tnl_l4_ck_cmd_21 {
+		u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x14 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_21_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_30_ADDR (0x75c378)
+#define NBL_DPED_TNL_L4_CK_CMD_30_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_30_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_30_DWLEN (1)
+union dped_tnl_l4_ck_cmd_30_u {
+	struct dped_tnl_l4_ck_cmd_30 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+		u32 len_phid:2; /* [16:15] Default:0x0 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x0 RW */
+		u32 in_oft:7; /* [25:19] Default:0x0 RW */
+		u32 phid:2; /* [27:26] Default:0x0 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_30_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_31_ADDR (0x75c37c)
+#define NBL_DPED_TNL_L4_CK_CMD_31_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_31_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_31_DWLEN (1)
+union dped_tnl_l4_ck_cmd_31_u {
+	struct dped_tnl_l4_ck_cmd_31 {
+		u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_31_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_40_ADDR (0x75c380)
+#define NBL_DPED_TNL_L4_CK_CMD_40_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_40_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_40_DWLEN (1)
+union dped_tnl_l4_ck_cmd_40_u {
+	struct dped_tnl_l4_ck_cmd_40 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+		u32 len_phid:2; /* [16:15] Default:0x0 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x0 RW */
+		u32 in_oft:7; /* [25:19] Default:0x0 RW */
+		u32 phid:2; /* [27:26] Default:0x0 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_40_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_41_ADDR (0x75c384)
+#define NBL_DPED_TNL_L4_CK_CMD_41_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_41_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_41_DWLEN (1)
+union dped_tnl_l4_ck_cmd_41_u {
+	struct dped_tnl_l4_ck_cmd_41 {
+		u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_41_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_50_ADDR (0x75c388)
+#define NBL_DPED_TNL_L4_CK_CMD_50_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_50_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_50_DWLEN (1)
+union dped_tnl_l4_ck_cmd_50_u {
+	struct dped_tnl_l4_ck_cmd_50 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+		u32 len_phid:2; /* [16:15] Default:0x0 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x0 RW */
+		u32 in_oft:7; /* [25:19] Default:0x0 RW */
+		u32 phid:2; /* [27:26] Default:0x0 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_50_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_51_ADDR (0x75c38c)
+#define NBL_DPED_TNL_L4_CK_CMD_51_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_51_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_51_DWLEN (1)
+union dped_tnl_l4_ck_cmd_51_u {
+	struct dped_tnl_l4_ck_cmd_51 {
+		u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_51_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_60_ADDR (0x75c390)
+#define NBL_DPED_TNL_L4_CK_CMD_60_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_60_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_60_DWLEN (1)
+union dped_tnl_l4_ck_cmd_60_u {
+	struct dped_tnl_l4_ck_cmd_60 {
+		u32 value:8; /* [7:0] Default:0x0 RW */
+		u32 len_in_oft:7; /* [14:8] Default:0x0 RW */
+		u32 len_phid:2; /* [16:15] Default:0x0 RW */
+		u32 len_vld:1; /* [17] Default:0x0 RW */
+		u32 data_vld:1; /* [18] Default:0x0 RW */
+		u32 in_oft:7; /* [25:19] Default:0x0 RW */
+		u32 phid:2; /* [27:26] Default:0x0 RW */
+		u32 flag:1; /* [28] Default:0x0 RW */
+		u32 mode:1; /* [29] Default:0x0 RW */
+		u32 rsv:1; /* [30] Default:0x0 RO */
+		u32 en:1; /* [31] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_60_DWLEN];
+} __packed;
+
+#define NBL_DPED_TNL_L4_CK_CMD_61_ADDR (0x75c394)
+#define NBL_DPED_TNL_L4_CK_CMD_61_DEPTH (1)
+#define NBL_DPED_TNL_L4_CK_CMD_61_WIDTH (32)
+#define NBL_DPED_TNL_L4_CK_CMD_61_DWLEN (1)
+union dped_tnl_l4_ck_cmd_61_u {
+	struct dped_tnl_l4_ck_cmd_61 {
+		u32 ck_start0:6; /* [5:0] Default:0x8 RW */
+		u32 ck_phid0:2; /* [7:6] Default:0x2 RW */
+		u32 ck_len0:7; /* [14:8] Default:0x20 RW */
+		u32 ck_vld0:1; /* [15] Default:0x1 RW */
+		u32 ck_start1:6; /* [21:16] Default:0x0 RW */
+		u32 ck_phid1:2; /* [23:22] Default:0x3 RW */
+		u32 ck_len1:7; /* [30:24] Default:0x8 RW */
+		u32 ck_vld1:1; /* [31] Default:0x1 RW */
+	} __packed info;
+	u32 data[NBL_DPED_TNL_L4_CK_CMD_61_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_00_ADDR (0x75c3a0)
+#define NBL_DPED_MIR_CMD_00_DEPTH (1)
+#define NBL_DPED_MIR_CMD_00_WIDTH (32)
+#define NBL_DPED_MIR_CMD_00_DWLEN (1)
+union dped_mir_cmd_00_u {
+	struct dped_mir_cmd_00 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_00_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_01_ADDR (0x75c3a4)
+#define NBL_DPED_MIR_CMD_01_DEPTH (1)
+#define NBL_DPED_MIR_CMD_01_WIDTH (32)
+#define NBL_DPED_MIR_CMD_01_DWLEN (1)
+union dped_mir_cmd_01_u {
+	struct dped_mir_cmd_01 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_01_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_10_ADDR (0x75c3a8)
+#define NBL_DPED_MIR_CMD_10_DEPTH (1)
+#define NBL_DPED_MIR_CMD_10_WIDTH (32)
+#define NBL_DPED_MIR_CMD_10_DWLEN (1)
+union dped_mir_cmd_10_u {
+	struct dped_mir_cmd_10 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_11_ADDR (0x75c3ac)
+#define NBL_DPED_MIR_CMD_11_DEPTH (1)
+#define NBL_DPED_MIR_CMD_11_WIDTH (32)
+#define NBL_DPED_MIR_CMD_11_DWLEN (1)
+union dped_mir_cmd_11_u {
+	struct dped_mir_cmd_11 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_20_ADDR (0x75c3b0)
+#define NBL_DPED_MIR_CMD_20_DEPTH (1)
+#define NBL_DPED_MIR_CMD_20_WIDTH (32)
+#define NBL_DPED_MIR_CMD_20_DWLEN (1)
+union dped_mir_cmd_20_u {
+	struct dped_mir_cmd_20 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_20_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_21_ADDR (0x75c3b4)
+#define NBL_DPED_MIR_CMD_21_DEPTH (1)
+#define NBL_DPED_MIR_CMD_21_WIDTH (32)
+#define NBL_DPED_MIR_CMD_21_DWLEN (1)
+union dped_mir_cmd_21_u {
+	struct dped_mir_cmd_21 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_21_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_30_ADDR (0x75c3b8)
+#define NBL_DPED_MIR_CMD_30_DEPTH (1)
+#define NBL_DPED_MIR_CMD_30_WIDTH (32)
+#define NBL_DPED_MIR_CMD_30_DWLEN (1)
+union dped_mir_cmd_30_u {
+	struct dped_mir_cmd_30 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_30_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_31_ADDR (0x75c3bc)
+#define NBL_DPED_MIR_CMD_31_DEPTH (1)
+#define NBL_DPED_MIR_CMD_31_WIDTH (32)
+#define NBL_DPED_MIR_CMD_31_DWLEN (1)
+union dped_mir_cmd_31_u {
+	struct dped_mir_cmd_31 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_31_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_40_ADDR (0x75c3c0)
+#define NBL_DPED_MIR_CMD_40_DEPTH (1)
+#define NBL_DPED_MIR_CMD_40_WIDTH (32)
+#define NBL_DPED_MIR_CMD_40_DWLEN (1)
+union dped_mir_cmd_40_u {
+	struct dped_mir_cmd_40 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_40_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_41_ADDR (0x75c3c4)
+#define NBL_DPED_MIR_CMD_41_DEPTH (1)
+#define NBL_DPED_MIR_CMD_41_WIDTH (32)
+#define NBL_DPED_MIR_CMD_41_DWLEN (1)
+union dped_mir_cmd_41_u {
+	struct dped_mir_cmd_41 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_41_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_50_ADDR (0x75c3c8)
+#define NBL_DPED_MIR_CMD_50_DEPTH (1)
+#define NBL_DPED_MIR_CMD_50_WIDTH (32)
+#define NBL_DPED_MIR_CMD_50_DWLEN (1)
+union dped_mir_cmd_50_u {
+	struct dped_mir_cmd_50 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_50_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_51_ADDR (0x75c3cc)
+#define NBL_DPED_MIR_CMD_51_DEPTH (1)
+#define NBL_DPED_MIR_CMD_51_WIDTH (32)
+#define NBL_DPED_MIR_CMD_51_DWLEN (1)
+union dped_mir_cmd_51_u {
+	struct dped_mir_cmd_51 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_51_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_60_ADDR (0x75c3d0)
+#define NBL_DPED_MIR_CMD_60_DEPTH (1)
+#define NBL_DPED_MIR_CMD_60_WIDTH (32)
+#define NBL_DPED_MIR_CMD_60_DWLEN (1)
+union dped_mir_cmd_60_u {
+	struct dped_mir_cmd_60 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_60_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_61_ADDR (0x75c3d4)
+#define NBL_DPED_MIR_CMD_61_DEPTH (1)
+#define NBL_DPED_MIR_CMD_61_WIDTH (32)
+#define NBL_DPED_MIR_CMD_61_DWLEN (1)
+union dped_mir_cmd_61_u {
+	struct dped_mir_cmd_61 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_61_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_70_ADDR (0x75c3d8)
+#define NBL_DPED_MIR_CMD_70_DEPTH (1)
+#define NBL_DPED_MIR_CMD_70_WIDTH (32)
+#define NBL_DPED_MIR_CMD_70_DWLEN (1)
+union dped_mir_cmd_70_u {
+	struct dped_mir_cmd_70 {
+		u32 len:7; /* [6:0] Default:0x0 RW */
+		u32 rsv2:1; /* [7] Default:0x0 RO */
+		u32 oft:7; /* [14:8] Default:0x0 RW */
+		u32 rsv1:1; /* [15] Default:0x0 RO */
+		u32 mode:1; /* [16] Default:0x0 RW */
+		u32 en:1; /* [17] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_70_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIR_CMD_71_ADDR (0x75c3dc)
+#define NBL_DPED_MIR_CMD_71_DEPTH (1)
+#define NBL_DPED_MIR_CMD_71_WIDTH (32)
+#define NBL_DPED_MIR_CMD_71_DWLEN (1)
+union dped_mir_cmd_71_u {
+	struct dped_mir_cmd_71 {
+		u32 vau:16; /* [15:0] Default:0x0 RW */
+		u32 type_sel:2; /* [17:16] Default:0x0 RW */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIR_CMD_71_DWLEN];
+} __packed;
+
+#define NBL_DPED_DSCP_CK_EN_ADDR (0x75c3e8)
+#define NBL_DPED_DSCP_CK_EN_DEPTH (1)
+#define NBL_DPED_DSCP_CK_EN_WIDTH (32)
+#define NBL_DPED_DSCP_CK_EN_DWLEN (1)
+union dped_dscp_ck_en_u {
+	struct dped_dscp_ck_en {
+		u32 l4_en:1; /* [0] Default:0x0 RW */
+		u32 l3_en:1; /* [1] Default:0x1 RW */
+		u32 rsv:30; /* [31:2] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_DSCP_CK_EN_DWLEN];
+} __packed;
+
+#define NBL_DPED_RDMA_ECN_REMARK_ADDR (0x75c3f0)
+#define NBL_DPED_RDMA_ECN_REMARK_DEPTH (1)
+#define NBL_DPED_RDMA_ECN_REMARK_WIDTH (32)
+#define NBL_DPED_RDMA_ECN_REMARK_DWLEN (1)
+union dped_rdma_ecn_remark_u {
+	struct dped_rdma_ecn_remark {
+		u32 vau:2; /* [1:0] Default:0x1 RW */
+		u32 rsv1:2; /* [3:2] Default:0x0 RO */
+		u32 en:1; /* [4] Default:0x0 RW */
+		u32 rsv:27; /* [31:5] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_RDMA_ECN_REMARK_DWLEN];
+} __packed;
+
+#define NBL_DPED_VLAN_OFFSET_ADDR (0x75c3f4)
+#define NBL_DPED_VLAN_OFFSET_DEPTH (1)
+#define NBL_DPED_VLAN_OFFSET_WIDTH (32)
+#define NBL_DPED_VLAN_OFFSET_DWLEN (1)
+union dped_vlan_offset_u {
+	struct dped_vlan_offset {
+		u32 oft:8; /* [7:0] Default:0xC RW */
+		u32 rsv:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_VLAN_OFFSET_DWLEN];
+} __packed;
+
+#define NBL_DPED_DSCP_OFFSET_0_ADDR (0x75c3f8)
+#define NBL_DPED_DSCP_OFFSET_0_DEPTH (1)
+#define NBL_DPED_DSCP_OFFSET_0_WIDTH (32)
+#define NBL_DPED_DSCP_OFFSET_0_DWLEN (1)
+union dped_dscp_offset_0_u {
+	struct dped_dscp_offset_0 {
+		u32 oft:8; /* [7:0] Default:0x8 RW */
+		u32 rsv:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_DSCP_OFFSET_0_DWLEN];
+} __packed;
+
+#define NBL_DPED_DSCP_OFFSET_1_ADDR (0x75c3fc)
+#define NBL_DPED_DSCP_OFFSET_1_DEPTH (1)
+#define NBL_DPED_DSCP_OFFSET_1_WIDTH (32)
+#define NBL_DPED_DSCP_OFFSET_1_DWLEN (1)
+union dped_dscp_offset_1_u {
+	struct dped_dscp_offset_1 {
+		u32 oft:8; /* [7:0] Default:0x4 RW */
+		u32 rsv:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_DSCP_OFFSET_1_DWLEN];
+} __packed;
+
+#define NBL_DPED_CFG_TEST_ADDR (0x75c600)
+#define NBL_DPED_CFG_TEST_DEPTH (1)
+#define NBL_DPED_CFG_TEST_WIDTH (32)
+#define NBL_DPED_CFG_TEST_DWLEN (1)
+union dped_cfg_test_u {
+	struct dped_cfg_test {
+		u32 test:32; /* [31:00] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_DPED_CFG_TEST_DWLEN];
+} __packed;
+
+#define NBL_DPED_BP_STATE_ADDR (0x75c608)
+#define NBL_DPED_BP_STATE_DEPTH (1)
+#define NBL_DPED_BP_STATE_WIDTH (32)
+#define NBL_DPED_BP_STATE_DWLEN (1)
+union dped_bp_state_u {
+	struct dped_bp_state {
+		u32 bm_rtn_tout:1; /* [0] Default:0x0 RO */
+		u32 bm_not_rdy:1; /* [1] Default:0x0 RO */
+		u32 dprbac_fc:1; /* [2] Default:0x0 RO */
+		u32 qm_fc:1; /* [3] Default:0x0 RO */
+		u32 rsv:28; /* [31:04] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_BP_STATE_DWLEN];
+} __packed;
+
+#define NBL_DPED_BP_HISTORY_ADDR (0x75c60c)
+#define NBL_DPED_BP_HISTORY_DEPTH (1)
+#define NBL_DPED_BP_HISTORY_WIDTH (32)
+#define NBL_DPED_BP_HISTORY_DWLEN (1)
+union dped_bp_history_u {
+	struct dped_bp_history {
+		u32 bm_rtn_tout:1; /* [0] Default:0x0 RC */
+		u32 bm_not_rdy:1; /* [1] Default:0x0 RC */
+		u32 dprbac_fc:1; /* [2] Default:0x0 RC */
+		u32 qm_fc:1; /* [3] Default:0x0 RC */
+		u32 rsv:28; /* [31:04] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_BP_HISTORY_DWLEN];
+} __packed;
+
+#define NBL_DPED_MIRID_IND_ADDR (0x75c900)
+#define NBL_DPED_MIRID_IND_DEPTH (1)
+#define NBL_DPED_MIRID_IND_WIDTH (32)
+#define NBL_DPED_MIRID_IND_DWLEN (1)
+union dped_mirid_ind_u {
+	struct dped_mirid_ind {
+		u32 nomat:1; /* [0] Default:0x0 RC */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MIRID_IND_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_AUX_OFT_ADDR (0x75c904)
+#define NBL_DPED_MD_AUX_OFT_DEPTH (1)
+#define NBL_DPED_MD_AUX_OFT_WIDTH (32)
+#define NBL_DPED_MD_AUX_OFT_DWLEN (1)
+union dped_md_aux_oft_u {
+	struct dped_md_aux_oft {
+		u32 l2_oft:8; /* [7:0] Default:0x0 RO */
+		u32 l3_oft:8; /* [15:8] Default:0x0 RO */
+		u32 l4_oft:8; /* [23:16] Default:0x0 RO */
+		u32 pld_oft:8; /* [31:24] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_AUX_OFT_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_AUX_PKT_LEN_ADDR (0x75c908)
+#define NBL_DPED_MD_AUX_PKT_LEN_DEPTH (1)
+#define NBL_DPED_MD_AUX_PKT_LEN_WIDTH (32)
+#define NBL_DPED_MD_AUX_PKT_LEN_DWLEN (1)
+union dped_md_aux_pkt_len_u {
+	struct dped_md_aux_pkt_len {
+		u32 len:14; /* [13:0] Default:0x0 RO */
+		u32 rsv:18; /* [31:14] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_AUX_PKT_LEN_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_FWD_MIR_ADDR (0x75c90c)
+#define NBL_DPED_MD_FWD_MIR_DEPTH (1)
+#define NBL_DPED_MD_FWD_MIR_WIDTH (32)
+#define NBL_DPED_MD_FWD_MIR_DWLEN (1)
+union dped_md_fwd_mir_u {
+	struct dped_md_fwd_mir {
+		u32 id:4; /* [3:0] Default:0x0 RO */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_FWD_MIR_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_FWD_DPORT_ADDR (0x75c910)
+#define NBL_DPED_MD_FWD_DPORT_DEPTH (1)
+#define NBL_DPED_MD_FWD_DPORT_WIDTH (32)
+#define NBL_DPED_MD_FWD_DPORT_DWLEN (1)
+union dped_md_fwd_dport_u {
+	struct dped_md_fwd_dport {
+		u32 id:16; /* [15:0] Default:0x0 RO */
+		u32 rsv:16; /* [31:16] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_FWD_DPORT_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_AUX_PLD_CKSUM_ADDR (0x75c914)
+#define NBL_DPED_MD_AUX_PLD_CKSUM_DEPTH (1)
+#define NBL_DPED_MD_AUX_PLD_CKSUM_WIDTH (32)
+#define NBL_DPED_MD_AUX_PLD_CKSUM_DWLEN (1)
+union dped_md_aux_pld_cksum_u {
+	struct dped_md_aux_pld_cksum {
+		u32 ck:32; /* [31:0] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_AUX_PLD_CKSUM_DWLEN];
+} __packed;
+
+#define NBL_DPED_INNER_PKT_CKSUM_ADDR (0x75c918)
+#define NBL_DPED_INNER_PKT_CKSUM_DEPTH (1)
+#define NBL_DPED_INNER_PKT_CKSUM_WIDTH (32)
+#define NBL_DPED_INNER_PKT_CKSUM_DWLEN (1)
+union dped_inner_pkt_cksum_u {
+	struct dped_inner_pkt_cksum {
+		u32 ck:16; /* [15:0] Default:0x0 RO */
+		u32 rsv:16; /* [31:16] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_INNER_PKT_CKSUM_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_0_ADDR (0x75c920)
+#define NBL_DPED_MD_EDIT_0_DEPTH (1)
+#define NBL_DPED_MD_EDIT_0_WIDTH (32)
+#define NBL_DPED_MD_EDIT_0_DWLEN (1)
+union dped_md_edit_0_u {
+	struct dped_md_edit_0 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_0_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_1_ADDR (0x75c924)
+#define NBL_DPED_MD_EDIT_1_DEPTH (1)
+#define NBL_DPED_MD_EDIT_1_WIDTH (32)
+#define NBL_DPED_MD_EDIT_1_DWLEN (1)
+union dped_md_edit_1_u {
+	struct dped_md_edit_1 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_1_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_2_ADDR (0x75c928)
+#define NBL_DPED_MD_EDIT_2_DEPTH (1)
+#define NBL_DPED_MD_EDIT_2_WIDTH (32)
+#define NBL_DPED_MD_EDIT_2_DWLEN (1)
+union dped_md_edit_2_u {
+	struct dped_md_edit_2 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_2_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_3_ADDR (0x75c92c)
+#define NBL_DPED_MD_EDIT_3_DEPTH (1)
+#define NBL_DPED_MD_EDIT_3_WIDTH (32)
+#define NBL_DPED_MD_EDIT_3_DWLEN (1)
+union dped_md_edit_3_u {
+	struct dped_md_edit_3 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_3_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_4_ADDR (0x75c930)
+#define NBL_DPED_MD_EDIT_4_DEPTH (1)
+#define NBL_DPED_MD_EDIT_4_WIDTH (32)
+#define NBL_DPED_MD_EDIT_4_DWLEN (1)
+union dped_md_edit_4_u {
+	struct dped_md_edit_4 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_4_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_5_ADDR (0x75c934)
+#define NBL_DPED_MD_EDIT_5_DEPTH (1)
+#define NBL_DPED_MD_EDIT_5_WIDTH (32)
+#define NBL_DPED_MD_EDIT_5_DWLEN (1)
+union dped_md_edit_5_u {
+	struct dped_md_edit_5 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_5_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_6_ADDR (0x75c938)
+#define NBL_DPED_MD_EDIT_6_DEPTH (1)
+#define NBL_DPED_MD_EDIT_6_WIDTH (32)
+#define NBL_DPED_MD_EDIT_6_DWLEN (1)
+union dped_md_edit_6_u {
+	struct dped_md_edit_6 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_6_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_7_ADDR (0x75c93c)
+#define NBL_DPED_MD_EDIT_7_DEPTH (1)
+#define NBL_DPED_MD_EDIT_7_WIDTH (32)
+#define NBL_DPED_MD_EDIT_7_DWLEN (1)
+union dped_md_edit_7_u {
+	struct dped_md_edit_7 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_7_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_8_ADDR (0x75c940)
+#define NBL_DPED_MD_EDIT_8_DEPTH (1)
+#define NBL_DPED_MD_EDIT_8_WIDTH (32)
+#define NBL_DPED_MD_EDIT_8_DWLEN (1)
+union dped_md_edit_8_u {
+	struct dped_md_edit_8 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_8_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_9_ADDR (0x75c944)
+#define NBL_DPED_MD_EDIT_9_DEPTH (1)
+#define NBL_DPED_MD_EDIT_9_WIDTH (32)
+#define NBL_DPED_MD_EDIT_9_DWLEN (1)
+union dped_md_edit_9_u {
+	struct dped_md_edit_9 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_9_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_10_ADDR (0x75c948)
+#define NBL_DPED_MD_EDIT_10_DEPTH (1)
+#define NBL_DPED_MD_EDIT_10_WIDTH (32)
+#define NBL_DPED_MD_EDIT_10_DWLEN (1)
+union dped_md_edit_10_u {
+	struct dped_md_edit_10 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_10_DWLEN];
+} __packed;
+
+#define NBL_DPED_MD_EDIT_11_ADDR (0x75c94c)
+#define NBL_DPED_MD_EDIT_11_DEPTH (1)
+#define NBL_DPED_MD_EDIT_11_WIDTH (32)
+#define NBL_DPED_MD_EDIT_11_DWLEN (1)
+union dped_md_edit_11_u {
+	struct dped_md_edit_11 {
+		u32 vau:16; /* [15:0] Default:0x0 RO */
+		u32 id:6; /* [21:16] Default:0x0 RO */
+		u32 rsv:10; /* [31:22] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_MD_EDIT_11_DWLEN];
+} __packed;
+
+#define NBL_DPED_ADD_DEL_LEN_ADDR (0x75c950)
+#define NBL_DPED_ADD_DEL_LEN_DEPTH (1)
+#define NBL_DPED_ADD_DEL_LEN_WIDTH (32)
+#define NBL_DPED_ADD_DEL_LEN_DWLEN (1)
+union dped_add_del_len_u {
+	struct dped_add_del_len {
+		u32 len:9; /* [8:0] Default:0x0 RO */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_ADD_DEL_LEN_DWLEN];
+} __packed;
+
+#define NBL_DPED_TTL_INFO_ADDR (0x75c970)
+#define NBL_DPED_TTL_INFO_DEPTH (1)
+#define NBL_DPED_TTL_INFO_WIDTH (32)
+#define NBL_DPED_TTL_INFO_DWLEN (1)
+union dped_ttl_info_u {
+	struct dped_ttl_info {
+		u32 old_ttl:8; /* [7:0] Default:0x0 RO */
+		u32 new_ttl:8; /* [15:8] Default:0x0 RO */
+		u32 ttl_val:1; /* [16] Default:0x0 RC */
+		u32 rsv:15; /* [31:17] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_TTL_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_LEN_INFO_VLD_ADDR (0x75c974)
+#define NBL_DPED_LEN_INFO_VLD_DEPTH (1)
+#define NBL_DPED_LEN_INFO_VLD_WIDTH (32)
+#define NBL_DPED_LEN_INFO_VLD_DWLEN (1)
+union dped_len_info_vld_u {
+	struct dped_len_info_vld {
+		u32 length0:1; /* [0] Default:0x0 RC */
+		u32 length1:1; /* [1] Default:0x0 RC */
+		u32 rsv:30; /* [31:2] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_LEN_INFO_VLD_DWLEN];
+} __packed;
+
+#define NBL_DPED_LEN0_INFO_ADDR (0x75c978)
+#define NBL_DPED_LEN0_INFO_DEPTH (1)
+#define NBL_DPED_LEN0_INFO_WIDTH (32)
+#define NBL_DPED_LEN0_INFO_DWLEN (1)
+union dped_len0_info_u {
+	struct dped_len0_info {
+		u32 old_len:16; /* [15:0] Default:0x0 RO */
+		u32 new_len:16; /* [31:16] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_LEN0_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_LEN1_INFO_ADDR (0x75c97c)
+#define NBL_DPED_LEN1_INFO_DEPTH (1)
+#define NBL_DPED_LEN1_INFO_WIDTH (32)
+#define NBL_DPED_LEN1_INFO_DWLEN (1)
+union dped_len1_info_u {
+	struct dped_len1_info {
+		u32 old_len:16; /* [15:0] Default:0x0 RO */
+		u32 new_len:16; /* [31:16] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_LEN1_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_EDIT_ATNUM_INFO_ADDR (0x75c980)
+#define NBL_DPED_EDIT_ATNUM_INFO_DEPTH (1)
+#define NBL_DPED_EDIT_ATNUM_INFO_WIDTH (32)
+#define NBL_DPED_EDIT_ATNUM_INFO_DWLEN (1)
+union dped_edit_atnum_info_u {
+	struct dped_edit_atnum_info {
+		u32 replace:4; /* [3:0] Default:0x0 RO */
+		u32 del:4; /* [7:4] Default:0x0 RO */
+		u32 add:4; /* [11:8] Default:0x0 RO */
+		u32 ttl:4; /* [15:12] Default:0x0 RO */
+		u32 dscp:4; /* [19:16] Default:0x0 RO */
+		u32 tnl:4; /* [23:20] Default:0x0 RO */
+		u32 sport:4; /* [27:24] Default:0x0 RO */
+		u32 rsv:4; /* [31:28] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_EDIT_ATNUM_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_EDIT_NO_AT_INFO_ADDR (0x75c984)
+#define NBL_DPED_EDIT_NO_AT_INFO_DEPTH (1)
+#define NBL_DPED_EDIT_NO_AT_INFO_WIDTH (32)
+#define NBL_DPED_EDIT_NO_AT_INFO_DWLEN (1)
+union dped_edit_no_at_info_u {
+	struct dped_edit_no_at_info {
+		u32 l3_len:1; /* [0] Default:0x0 RC */
+		u32 l4_len:1; /* [1] Default:0x0 RC */
+		u32 l3_ck:1; /* [2] Default:0x0 RC */
+		u32 l4_ck:1; /* [3] Default:0x0 RC */
+		u32 sctp_ck:1; /* [4] Default:0x0 RC */
+		u32 padding:1; /* [5] Default:0x0 RC */
+		u32 rsv:26; /* [31:06] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_EDIT_NO_AT_INFO_DWLEN];
+} __packed;
+
+#define NBL_DPED_HW_EDT_PROF_ADDR (0x75d000)
+#define NBL_DPED_HW_EDT_PROF_DEPTH (32)
+#define NBL_DPED_HW_EDT_PROF_WIDTH (32)
+#define NBL_DPED_HW_EDT_PROF_DWLEN (1)
+union dped_hw_edt_prof_u {
+	struct dped_hw_edt_prof {
+		u32 l4_len:2; /* [1:0] Default:0x2 RW */
+		u32 l3_len:2; /* [3:2] Default:0x2 RW */
+		u32 l4_ck:3; /* [6:4] Default:0x7 RW */
+		u32 l3_ck:1; /* [7:7] Default:0x0 RW */
+		u32 l4_ck_zero_free:1; /* [8:8] Default:0x1 RW */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_HW_EDT_PROF_DWLEN];
+} __packed;
+#define NBL_DPED_HW_EDT_PROF_REG(r) (NBL_DPED_HW_EDT_PROF_ADDR + \
+		(NBL_DPED_HW_EDT_PROF_DWLEN * 4) * (r))
+
+#define NBL_DPED_OUT_MASK_ADDR (0x75e000)
+#define NBL_DPED_OUT_MASK_DEPTH (24)
+#define NBL_DPED_OUT_MASK_WIDTH (64)
+#define NBL_DPED_OUT_MASK_DWLEN (2)
+union dped_out_mask_u {
+	struct dped_out_mask {
+		u32 flag:32; /* [31:0] Default:0x0 RW */
+		u32 fwd:30; /* [61:32] Default:0x0 RW */
+		u32 rsv:2; /* [63:62] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_OUT_MASK_DWLEN];
+} __packed;
+#define NBL_DPED_OUT_MASK_REG(r) (NBL_DPED_OUT_MASK_ADDR + \
+		(NBL_DPED_OUT_MASK_DWLEN * 4) * (r))
+
+#define NBL_DPED_TAB_EDIT_CMD_ADDR (0x75f000)
+#define NBL_DPED_TAB_EDIT_CMD_DEPTH (32)
+#define NBL_DPED_TAB_EDIT_CMD_WIDTH (32)
+#define NBL_DPED_TAB_EDIT_CMD_DWLEN (1)
+union dped_tab_edit_cmd_u {
+	struct dped_tab_edit_cmd {
+		u32 in_offset:8; /* [7:0] Default:0x0 RW */
+		u32 phid:2; /* [9:8] Default:0x0 RW */
+		u32 len:7; /* [16:10] Default:0x0 RW */
+		u32 mode:4; /* [20:17] Default:0xf RW */
+		u32 l4_ck_ofld_upt:1; /* [21] Default:0x1 RW */
+		u32 l3_ck_ofld_upt:1; /* [22] Default:0x1 RW */
+		u32 rsv:9; /* [31:23] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DPED_TAB_EDIT_CMD_DWLEN];
+} __packed;
+#define NBL_DPED_TAB_EDIT_CMD_REG(r) (NBL_DPED_TAB_EDIT_CMD_ADDR
+ \ + (NBL_DPED_TAB_EDIT_CMD_DWLEN * 4) * (r)) + +#define NBL_DPED_TAB_MIR_ADDR (0x760000) +#define NBL_DPED_TAB_MIR_DEPTH (8) +#define NBL_DPED_TAB_MIR_WIDTH (1024) +#define NBL_DPED_TAB_MIR_DWLEN (32) +union dped_tab_mir_u { + struct dped_tab_mir { + u32 cfg_mir_data:16; /* [719:0] Default:0x0 RW */ + u32 cfg_mir_data_arr[22]; /* [719:0] Default:0x0 RW */ + u32 cfg_mir_info_l:32; /* [755:720] Default:0x0 RW */ + u32 cfg_mir_info_h:4; /* [755:720] Default:0x0 RW */ + u32 rsv:12; /* [1023:756] Default:0x0 RO */ + u32 rsv_arr[8]; /* [1023:756] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TAB_MIR_DWLEN]; +} __packed; +#define NBL_DPED_TAB_MIR_REG(r) (NBL_DPED_TAB_MIR_ADDR + \ + (NBL_DPED_TAB_MIR_DWLEN * 4) * (r)) + +#define NBL_DPED_TAB_VSI_TYPE_ADDR (0x761000) +#define NBL_DPED_TAB_VSI_TYPE_DEPTH (1031) +#define NBL_DPED_TAB_VSI_TYPE_WIDTH (32) +#define NBL_DPED_TAB_VSI_TYPE_DWLEN (1) +union dped_tab_vsi_type_u { + struct dped_tab_vsi_type { + u32 sel:4; /* [3:0] Default:0x0 RW */ + u32 rsv:28; /* [31:4] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TAB_VSI_TYPE_DWLEN]; +} __packed; +#define NBL_DPED_TAB_VSI_TYPE_REG(r) (NBL_DPED_TAB_VSI_TYPE_ADDR + \ + (NBL_DPED_TAB_VSI_TYPE_DWLEN * 4) * (r)) + +#define NBL_DPED_TAB_REPLACE_ADDR (0x763000) +#define NBL_DPED_TAB_REPLACE_DEPTH (2048) +#define NBL_DPED_TAB_REPLACE_WIDTH (64) +#define NBL_DPED_TAB_REPLACE_DWLEN (2) +union dped_tab_replace_u { + struct dped_tab_replace { + u32 vau_arr[2]; /* [63:0] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DPED_TAB_REPLACE_DWLEN]; +} __packed; +#define NBL_DPED_TAB_REPLACE_REG(r) (NBL_DPED_TAB_REPLACE_ADDR + \ + (NBL_DPED_TAB_REPLACE_DWLEN * 4) * (r)) + +#define NBL_DPED_TAB_TNL_ADDR (0x7dc000) +#define NBL_DPED_TAB_TNL_DEPTH (4096) +#define NBL_DPED_TAB_TNL_WIDTH (1024) +#define NBL_DPED_TAB_TNL_DWLEN (32) +union dped_tab_tnl_u { + struct dped_tab_tnl { + u32 cfg_tnl_data:16; /* [719:0] Default:0x0 RW */ + u32 cfg_tnl_data_arr[22]; /* [719:0] 
Default:0x0 RW */ + u32 cfg_tnl_info:8; /* [791:720] Default:0x0 RW */ + u32 cfg_tnl_info_arr[2]; /* [791:720] Default:0x0 RW */ + u32 rsv_l:32; /* [1023:792] Default:0x0 RO */ + u32 rsv_h:8; /* [1023:792] Default:0x0 RO */ + u32 rsv_arr[6]; /* [1023:792] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DPED_TAB_TNL_DWLEN]; +} __packed; +#define NBL_DPED_TAB_TNL_REG(r) (NBL_DPED_TAB_TNL_ADDR + \ + (NBL_DPED_TAB_TNL_DWLEN * 4) * (r)) + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h new file mode 100644 index 000000000000..554ef4592189 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_dstore.h @@ -0,0 +1,929 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ +// Code generated by interstellar. DO NOT EDIT. +// Compatible with leonis RTL tag 0710 + +#ifndef NBL_DSTORE_H +#define NBL_DSTORE_H 1 + +#include <linux/types.h> + +#define NBL_DSTORE_BASE (0x00704000) + +#define NBL_DSTORE_INT_STATUS_ADDR (0x704000) +#define NBL_DSTORE_INT_STATUS_DEPTH (1) +#define NBL_DSTORE_INT_STATUS_WIDTH (32) +#define NBL_DSTORE_INT_STATUS_DWLEN (1) +union dstore_int_status_u { + struct dstore_int_status { + u32 ucor_err:1; /* [0] Default:0x0 RWC */ + u32 cor_err:1; /* [1] Default:0x0 RWC */ + u32 fifo_uflw_err:1; /* [2] Default:0x0 RWC */ + u32 fifo_dflw_err:1; /* [3] Default:0x0 RWC */ + u32 cif_err:1; /* [4] Default:0x0 RWC */ + u32 parity_err:1; /* [5] Default:0x0 RWC */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_INT_STATUS_DWLEN]; +} __packed; + +#define NBL_DSTORE_INT_MASK_ADDR (0x704004) +#define NBL_DSTORE_INT_MASK_DEPTH (1) +#define NBL_DSTORE_INT_MASK_WIDTH (32) +#define NBL_DSTORE_INT_MASK_DWLEN (1) +union dstore_int_mask_u { + struct dstore_int_mask { + u32 ucor_err:1; /* [0] 
Default:0x0 RW */ + u32 cor_err:1; /* [1] Default:0x0 RW */ + u32 fifo_uflw_err:1; /* [2] Default:0x0 RW */ + u32 fifo_dflw_err:1; /* [3] Default:0x0 RW */ + u32 cif_err:1; /* [4] Default:0x0 RW */ + u32 parity_err:1; /* [5] Default:0x0 RW */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_INT_MASK_DWLEN]; +} __packed; + +#define NBL_DSTORE_INT_SET_ADDR (0x704008) +#define NBL_DSTORE_INT_SET_DEPTH (0) +#define NBL_DSTORE_INT_SET_WIDTH (32) +#define NBL_DSTORE_INT_SET_DWLEN (1) +union dstore_int_set_u { + struct dstore_int_set { + u32 ucor_err:1; /* [0] Default:0x0 WO */ + u32 cor_err:1; /* [1] Default:0x0 WO */ + u32 fifo_uflw_err:1; /* [2] Default:0x0 WO */ + u32 fifo_dflw_err:1; /* [3] Default:0x0 WO */ + u32 cif_err:1; /* [4] Default:0x0 WO */ + u32 parity_err:1; /* [5] Default:0x0 WO */ + u32 rsv:26; /* [31:6] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_INT_SET_DWLEN]; +} __packed; + +#define NBL_DSTORE_COR_ERR_INFO_ADDR (0x70400c) +#define NBL_DSTORE_COR_ERR_INFO_DEPTH (1) +#define NBL_DSTORE_COR_ERR_INFO_WIDTH (32) +#define NBL_DSTORE_COR_ERR_INFO_DWLEN (1) +union dstore_cor_err_info_u { + struct dstore_cor_err_info { + u32 ram_addr:10; /* [9:0] Default:0x0 RO */ + u32 rsv1:6; /* [15:10] Default:0x0 RO */ + u32 ram_id:4; /* [19:16] Default:0x0 RO */ + u32 rsv:12; /* [31:20] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_COR_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DSTORE_PARITY_ERR_INFO_ADDR (0x704014) +#define NBL_DSTORE_PARITY_ERR_INFO_DEPTH (1) +#define NBL_DSTORE_PARITY_ERR_INFO_WIDTH (32) +#define NBL_DSTORE_PARITY_ERR_INFO_DWLEN (1) +union dstore_parity_err_info_u { + struct dstore_parity_err_info { + u32 ram_addr:10; /* [9:0] Default:0x0 RO */ + u32 rsv1:6; /* [15:10] Default:0x0 RO */ + u32 ram_id:4; /* [19:16] Default:0x0 RO */ + u32 rsv:12; /* [31:20] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_PARITY_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DSTORE_CIF_ERR_INFO_ADDR 
(0x70401c) +#define NBL_DSTORE_CIF_ERR_INFO_DEPTH (1) +#define NBL_DSTORE_CIF_ERR_INFO_WIDTH (32) +#define NBL_DSTORE_CIF_ERR_INFO_DWLEN (1) +union dstore_cif_err_info_u { + struct dstore_cif_err_info { + u32 addr:30; /* [29:0] Default:0x0 RO */ + u32 wr_err:1; /* [30] Default:0x0 RO */ + u32 ucor_err:1; /* [31] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_CIF_ERR_INFO_DWLEN]; +} __packed; + +#define NBL_DSTORE_CAR_CTRL_ADDR (0x704100) +#define NBL_DSTORE_CAR_CTRL_DEPTH (1) +#define NBL_DSTORE_CAR_CTRL_WIDTH (32) +#define NBL_DSTORE_CAR_CTRL_DWLEN (1) +union dstore_car_ctrl_u { + struct dstore_car_ctrl { + u32 sctr_car:1; /* [0] Default:0x1 RW */ + u32 rctr_car:1; /* [1] Default:0x1 RW */ + u32 rc_car:1; /* [2] Default:0x1 RW */ + u32 tbl_rc_car:1; /* [3] Default:0x1 RW */ + u32 rsv:28; /* [31:4] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_CAR_CTRL_DWLEN]; +} __packed; + +#define NBL_DSTORE_INIT_START_ADDR (0x704104) +#define NBL_DSTORE_INIT_START_DEPTH (1) +#define NBL_DSTORE_INIT_START_WIDTH (32) +#define NBL_DSTORE_INIT_START_DWLEN (1) +union dstore_init_start_u { + struct dstore_init_start { + u32 init_start:1; /* [0] Default:0x0 WO */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_INIT_START_DWLEN]; +} __packed; + +#define NBL_DSTORE_PKT_LEN_ADDR (0x704108) +#define NBL_DSTORE_PKT_LEN_DEPTH (1) +#define NBL_DSTORE_PKT_LEN_WIDTH (32) +#define NBL_DSTORE_PKT_LEN_DWLEN (1) +union dstore_pkt_len_u { + struct dstore_pkt_len { + u32 min:7; /* [6:0] Default:60 RW */ + u32 rsv1:8; /* [14:7] Default:0x0 RO */ + u32 min_chk_en:1; /* [15] Default:0x0 RW */ + u32 max:14; /* [29:16] Default:9600 RW */ + u32 rsv:1; /* [30] Default:0x0 RO */ + u32 max_chk_en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DSTORE_PKT_LEN_DWLEN]; +} __packed; + +#define NBL_DSTORE_SCH_PD_BUFFER_TH_ADDR (0x704128) +#define NBL_DSTORE_SCH_PD_BUFFER_TH_DEPTH (1) +#define NBL_DSTORE_SCH_PD_BUFFER_TH_WIDTH (32) +#define 
NBL_DSTORE_SCH_PD_BUFFER_TH_DWLEN (1) +union dstore_sch_pd_buffer_th_u { + struct dstore_sch_pd_buffer_th { + u32 aful_th:9; /* [8:0] Default:500 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_SCH_PD_BUFFER_TH_DWLEN]; +} __packed; + +#define NBL_DSTORE_GLB_FC_TH_ADDR (0x70412c) +#define NBL_DSTORE_GLB_FC_TH_DEPTH (1) +#define NBL_DSTORE_GLB_FC_TH_WIDTH (32) +#define NBL_DSTORE_GLB_FC_TH_DWLEN (1) +union dstore_glb_fc_th_u { + struct dstore_glb_fc_th { + u32 xoff_th:10; /* [9:0] Default:900 RW */ + u32 rsv1:6; /* [15:10] Default:0x0 RO */ + u32 xon_th:10; /* [25:16] Default:850 RW */ + u32 rsv:5; /* [30:26] Default:0x0 RO */ + u32 fc_en:1; /* [31:31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DSTORE_GLB_FC_TH_DWLEN]; +} __packed; + +#define NBL_DSTORE_GLB_DROP_TH_ADDR (0x704130) +#define NBL_DSTORE_GLB_DROP_TH_DEPTH (1) +#define NBL_DSTORE_GLB_DROP_TH_WIDTH (32) +#define NBL_DSTORE_GLB_DROP_TH_DWLEN (1) +union dstore_glb_drop_th_u { + struct dstore_glb_drop_th { + u32 disc_th:10; /* [9:0] Default:985 RW */ + u32 rsv:21; /* [30:10] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DSTORE_GLB_DROP_TH_DWLEN]; +} __packed; + +#define NBL_DSTORE_PORT_FC_TH_ADDR (0x704134) +#define NBL_DSTORE_PORT_FC_TH_DEPTH (6) +#define NBL_DSTORE_PORT_FC_TH_WIDTH (32) +#define NBL_DSTORE_PORT_FC_TH_DWLEN (1) +union dstore_port_fc_th_u { + struct dstore_port_fc_th { + u32 xoff_th:10; /* [9:0] Default:400 RW */ + u32 rsv1:6; /* [15:10] Default:0x0 RO */ + u32 xon_th:10; /* [25:16] Default:400 RW */ + u32 rsv:4; /* [29:26] Default:0x0 RO */ + u32 fc_set:1; /* [30:30] Default:0x0 RW */ + u32 fc_en:1; /* [31:31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DSTORE_PORT_FC_TH_DWLEN]; +} __packed; +#define NBL_DSTORE_PORT_FC_TH_REG(r) (NBL_DSTORE_PORT_FC_TH_ADDR + \ + (NBL_DSTORE_PORT_FC_TH_DWLEN * 4) * (r)) + +#define NBL_DSTORE_PORT_DROP_TH_ADDR (0x704150) +#define NBL_DSTORE_PORT_DROP_TH_DEPTH 
(6) +#define NBL_DSTORE_PORT_DROP_TH_WIDTH (32) +#define NBL_DSTORE_PORT_DROP_TH_DWLEN (1) +union dstore_port_drop_th_u { + struct dstore_port_drop_th { + u32 disc_th:10; /* [9:0] Default:800 RW */ + u32 rsv:21; /* [30:10] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x1 RW */ + } __packed info; + u32 data[NBL_DSTORE_PORT_DROP_TH_DWLEN]; +} __packed; +#define NBL_DSTORE_PORT_DROP_TH_REG(r) (NBL_DSTORE_PORT_DROP_TH_ADDR + \ + (NBL_DSTORE_PORT_DROP_TH_DWLEN * 4) * (r)) + +#define NBL_DSTORE_CFG_TEST_ADDR (0x704170) +#define NBL_DSTORE_CFG_TEST_DEPTH (1) +#define NBL_DSTORE_CFG_TEST_WIDTH (32) +#define NBL_DSTORE_CFG_TEST_DWLEN (1) +union dstore_cfg_test_u { + struct dstore_cfg_test { + u32 test:32; /* [31:0] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DSTORE_CFG_TEST_DWLEN]; +} __packed; + +#define NBL_DSTORE_HIGH_PRI_PKT_ADDR (0x70417c) +#define NBL_DSTORE_HIGH_PRI_PKT_DEPTH (1) +#define NBL_DSTORE_HIGH_PRI_PKT_WIDTH (32) +#define NBL_DSTORE_HIGH_PRI_PKT_DWLEN (1) +union dstore_high_pri_pkt_u { + struct dstore_high_pri_pkt { + u32 en:1; /* [0:0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_HIGH_PRI_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_COS_FC_TH_ADDR (0x704200) +#define NBL_DSTORE_COS_FC_TH_DEPTH (48) +#define NBL_DSTORE_COS_FC_TH_WIDTH (32) +#define NBL_DSTORE_COS_FC_TH_DWLEN (1) +union dstore_cos_fc_th_u { + struct dstore_cos_fc_th { + u32 xoff_th:10; /* [9:0] Default:100 RW */ + u32 rsv1:6; /* [15:10] Default:0x0 RO */ + u32 xon_th:10; /* [25:16] Default:100 RW */ + u32 rsv:4; /* [29:26] Default:0x0 RO */ + u32 fc_set:1; /* [30:30] Default:0x0 RW */ + u32 fc_en:1; /* [31:31] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DSTORE_COS_FC_TH_DWLEN]; +} __packed; +#define NBL_DSTORE_COS_FC_TH_REG(r) (NBL_DSTORE_COS_FC_TH_ADDR + \ + (NBL_DSTORE_COS_FC_TH_DWLEN * 4) * (r)) + +#define NBL_DSTORE_COS_DROP_TH_ADDR (0x704300) +#define NBL_DSTORE_COS_DROP_TH_DEPTH (48) +#define 
NBL_DSTORE_COS_DROP_TH_WIDTH (32) +#define NBL_DSTORE_COS_DROP_TH_DWLEN (1) +union dstore_cos_drop_th_u { + struct dstore_cos_drop_th { + u32 disc_th:10; /* [9:0] Default:120 RW */ + u32 rsv:21; /* [30:10] Default:0x0 RO */ + u32 en:1; /* [31] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DSTORE_COS_DROP_TH_DWLEN]; +} __packed; +#define NBL_DSTORE_COS_DROP_TH_REG(r) (NBL_DSTORE_COS_DROP_TH_ADDR + \ + (NBL_DSTORE_COS_DROP_TH_DWLEN * 4) * (r)) + +#define NBL_DSTORE_SCH_PD_WRR_WGT_ADDR (0x704400) +#define NBL_DSTORE_SCH_PD_WRR_WGT_DEPTH (36) +#define NBL_DSTORE_SCH_PD_WRR_WGT_WIDTH (32) +#define NBL_DSTORE_SCH_PD_WRR_WGT_DWLEN (1) +union dstore_sch_pd_wrr_wgt_u { + struct dstore_sch_pd_wrr_wgt { + u32 wgt_cos:4; /* [3:0] Default:0x0 RW */ + u32 rsv:28; /* [31:4] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_SCH_PD_WRR_WGT_DWLEN]; +} __packed; +#define NBL_DSTORE_SCH_PD_WRR_WGT_REG(r) (NBL_DSTORE_SCH_PD_WRR_WGT_ADDR + \ + (NBL_DSTORE_SCH_PD_WRR_WGT_DWLEN * 4) * (r)) + +#define NBL_DSTORE_COS7_FORCE_ADDR (0x704504) +#define NBL_DSTORE_COS7_FORCE_DEPTH (1) +#define NBL_DSTORE_COS7_FORCE_WIDTH (32) +#define NBL_DSTORE_COS7_FORCE_DWLEN (1) +union dstore_cos7_force_u { + struct dstore_cos7_force { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_COS7_FORCE_DWLEN]; +} __packed; + +#define NBL_DSTORE_D_DPORT_FC_TH_ADDR (0x704600) +#define NBL_DSTORE_D_DPORT_FC_TH_DEPTH (5) +#define NBL_DSTORE_D_DPORT_FC_TH_WIDTH (32) +#define NBL_DSTORE_D_DPORT_FC_TH_DWLEN (1) +union dstore_d_dport_fc_th_u { + struct dstore_d_dport_fc_th { + u32 xoff_th:11; /* [10:0] Default:200 RW */ + u32 rsv1:5; /* [15:11] Default:0x0 RO */ + u32 xon_th:11; /* [26:16] Default:100 RW */ + u32 rsv:3; /* [29:27] Default:0x0 RO */ + u32 fc_set:1; /* [30:30] Default:0x0 RW */ + u32 fc_en:1; /* [31:31] Default:0x0 RW */ + } __packed info; + u32 data[NBL_DSTORE_D_DPORT_FC_TH_DWLEN]; +} __packed; +#define 
NBL_DSTORE_D_DPORT_FC_TH_REG(r) (NBL_DSTORE_D_DPORT_FC_TH_ADDR + \ + (NBL_DSTORE_D_DPORT_FC_TH_DWLEN * 4) * (r)) + +#define NBL_DSTORE_INIT_DONE_ADDR (0x704800) +#define NBL_DSTORE_INIT_DONE_DEPTH (1) +#define NBL_DSTORE_INIT_DONE_WIDTH (32) +#define NBL_DSTORE_INIT_DONE_DWLEN (1) +union dstore_init_done_u { + struct dstore_init_done { + u32 done:1; /* [0] Default:0x0 RO */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_INIT_DONE_DWLEN]; +} __packed; + +#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_ADDR (0x70481c) +#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_DEPTH (1) +#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_WIDTH (32) +#define NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_DWLEN (1) +union dstore_sch_idle_list_status_curr_u { + struct dstore_sch_idle_list_status_curr { + u32 empt:1; /* [0] Default:0x0 RO */ + u32 full:1; /* [1] Default:0x1 RO */ + u32 cnt:10; /* [11:2] Default:0x200 RO */ + u32 rsv:20; /* [31:12] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_SCH_IDLE_LIST_STATUS_CURR_DWLEN]; +} __packed; + +#define NBL_DSTORE_SCH_QUE_LIST_STATUS_ADDR (0x704820) +#define NBL_DSTORE_SCH_QUE_LIST_STATUS_DEPTH (48) +#define NBL_DSTORE_SCH_QUE_LIST_STATUS_WIDTH (32) +#define NBL_DSTORE_SCH_QUE_LIST_STATUS_DWLEN (1) +union dstore_sch_que_list_status_u { + struct dstore_sch_que_list_status { + u32 curr_empt:1; /* [0] Default:0x1 RO */ + u32 curr_cnt:10; /* [10:1] Default:0x0 RO */ + u32 history_udf:1; /* [11] Default:0x0 RC */ + u32 rsv:20; /* [31:12] Default:0x0 RO */ + } __packed info; + u32 data[NBL_DSTORE_SCH_QUE_LIST_STATUS_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_TOTAL_PKT_ADDR (0x705050) +#define NBL_DSTORE_RCV_TOTAL_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_TOTAL_PKT_DWLEN (1) +union dstore_rcv_total_pkt_u { + struct dstore_rcv_total_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_PKT_DWLEN]; +} __packed; + +#define 
NBL_DSTORE_RCV_TOTAL_BYTE_ADDR (0x705054) +#define NBL_DSTORE_RCV_TOTAL_BYTE_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_BYTE_WIDTH (48) +#define NBL_DSTORE_RCV_TOTAL_BYTE_DWLEN (2) +union dstore_rcv_total_byte_u { + struct dstore_rcv_total_byte { + u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */ + u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_BYTE_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_ADDR (0x70505c) +#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_DWLEN (1) +union dstore_rcv_total_right_pkt_u { + struct dstore_rcv_total_right_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_RIGHT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_ADDR (0x705060) +#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_TOTAL_WRONG_PKT_DWLEN (1) +union dstore_rcv_total_wrong_pkt_u { + struct dstore_rcv_total_wrong_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_WRONG_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_ADDR (0x705064) +#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_FWD_RIGHT_PKT_DWLEN (1) +union dstore_rcv_fwd_right_pkt_u { + struct dstore_rcv_fwd_right_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_FWD_RIGHT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_FWD_WRONG_PKT_ADDR (0x705068) +#define NBL_DSTORE_RCV_FWD_WRONG_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_FWD_WRONG_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_FWD_WRONG_PKT_DWLEN (1) +union dstore_rcv_fwd_wrong_pkt_u { + struct dstore_rcv_fwd_wrong_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 
data[NBL_DSTORE_RCV_FWD_WRONG_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_ADDR (0x70506c) +#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_HERR_RIGHT_PKT_DWLEN (1) +union dstore_rcv_herr_right_pkt_u { + struct dstore_rcv_herr_right_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_HERR_RIGHT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_HERR_WRONG_PKT_ADDR (0x705070) +#define NBL_DSTORE_RCV_HERR_WRONG_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_HERR_WRONG_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_HERR_WRONG_PKT_DWLEN (1) +union dstore_rcv_herr_wrong_pkt_u { + struct dstore_rcv_herr_wrong_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_HERR_WRONG_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_IPRO_TOTAL_PKT_ADDR (0x705074) +#define NBL_DSTORE_IPRO_TOTAL_PKT_DEPTH (1) +#define NBL_DSTORE_IPRO_TOTAL_PKT_WIDTH (32) +#define NBL_DSTORE_IPRO_TOTAL_PKT_DWLEN (1) +union dstore_ipro_total_pkt_u { + struct dstore_ipro_total_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_IPRO_TOTAL_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_IPRO_TOTAL_BYTE_ADDR (0x705078) +#define NBL_DSTORE_IPRO_TOTAL_BYTE_DEPTH (1) +#define NBL_DSTORE_IPRO_TOTAL_BYTE_WIDTH (48) +#define NBL_DSTORE_IPRO_TOTAL_BYTE_DWLEN (2) +union dstore_ipro_total_byte_u { + struct dstore_ipro_total_byte { + u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */ + u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_IPRO_TOTAL_BYTE_DWLEN]; +} __packed; + +#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_ADDR (0x705080) +#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_DEPTH (1) +#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_WIDTH (32) +#define NBL_DSTORE_IPRO_FWD_RIGHT_PKT_DWLEN (1) +union dstore_ipro_fwd_right_pkt_u { + struct dstore_ipro_fwd_right_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } 
__packed info; + u32 data[NBL_DSTORE_IPRO_FWD_RIGHT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_ADDR (0x705084) +#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_DEPTH (1) +#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_WIDTH (32) +#define NBL_DSTORE_IPRO_FWD_WRONG_PKT_DWLEN (1) +union dstore_ipro_fwd_wrong_pkt_u { + struct dstore_ipro_fwd_wrong_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_IPRO_FWD_WRONG_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_ADDR (0x705088) +#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_DEPTH (1) +#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_WIDTH (32) +#define NBL_DSTORE_IPRO_HERR_RIGHT_PKT_DWLEN (1) +union dstore_ipro_herr_right_pkt_u { + struct dstore_ipro_herr_right_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_IPRO_HERR_RIGHT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_ADDR (0x70508c) +#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_DEPTH (1) +#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_WIDTH (32) +#define NBL_DSTORE_IPRO_HERR_WRONG_PKT_DWLEN (1) +union dstore_ipro_herr_wrong_pkt_u { + struct dstore_ipro_herr_wrong_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_IPRO_HERR_WRONG_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_PMEM_TOTAL_PKT_ADDR (0x705090) +#define NBL_DSTORE_PMEM_TOTAL_PKT_DEPTH (1) +#define NBL_DSTORE_PMEM_TOTAL_PKT_WIDTH (32) +#define NBL_DSTORE_PMEM_TOTAL_PKT_DWLEN (1) +union dstore_pmem_total_pkt_u { + struct dstore_pmem_total_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_PMEM_TOTAL_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_PMEM_TOTAL_BYTE_ADDR (0x705094) +#define NBL_DSTORE_PMEM_TOTAL_BYTE_DEPTH (1) +#define NBL_DSTORE_PMEM_TOTAL_BYTE_WIDTH (48) +#define NBL_DSTORE_PMEM_TOTAL_BYTE_DWLEN (2) +union dstore_pmem_total_byte_u { + struct dstore_pmem_total_byte { + u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */ + u32 
cnt_h:16; /* [47:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_PMEM_TOTAL_BYTE_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_ADDR (0x70509c) +#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_DWLEN (1) +union dstore_rcv_total_err_drop_pkt_u { + struct dstore_rcv_total_err_drop_pkt { + u32 cnt:32; /* [31:0] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_ERR_DROP_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_ADDR (0x7050a0) +#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_TOTAL_SHORT_PKT_DWLEN (1) +union dstore_rcv_total_short_pkt_u { + struct dstore_rcv_total_short_pkt { + u32 cnt:32; /* [31:0] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_SHORT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_ADDR (0x7050a4) +#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_DEPTH (1) +#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_TOTAL_LONG_PKT_DWLEN (1) +union dstore_rcv_total_long_pkt_u { + struct dstore_rcv_total_long_pkt { + u32 cnt:32; /* [31:0] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_TOTAL_LONG_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_ADDR (0x7050a8) +#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_DEPTH (1) +#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_WIDTH (32) +#define NBL_DSTORE_BUF_TOTAL_DROP_PKT_DWLEN (1) +union dstore_buf_total_drop_pkt_u { + struct dstore_buf_total_drop_pkt { + u32 cnt:32; /* [31:0] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_DSTORE_BUF_TOTAL_DROP_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_ADDR (0x7050ac) +#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_DEPTH (1) +#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_WIDTH (32) +#define NBL_DSTORE_BUF_TOTAL_TRUN_PKT_DWLEN (1) +union 
dstore_buf_total_trun_pkt_u { + struct dstore_buf_total_trun_pkt { + u32 cnt:32; /* [31:0] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_DSTORE_BUF_TOTAL_TRUN_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_PORT_PKT_ADDR (0x706000) +#define NBL_DSTORE_RCV_PORT_PKT_DEPTH (12) +#define NBL_DSTORE_RCV_PORT_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_PORT_PKT_DWLEN (1) +union dstore_rcv_port_pkt_u { + struct dstore_rcv_port_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_PORT_PKT_DWLEN]; +} __packed; +#define NBL_DSTORE_RCV_PORT_PKT_REG(r) (NBL_DSTORE_RCV_PORT_PKT_ADDR + \ + (NBL_DSTORE_RCV_PORT_PKT_DWLEN * 4) * (r)) + +#define NBL_DSTORE_RCV_PORT_BYTE_ADDR (0x706040) +#define NBL_DSTORE_RCV_PORT_BYTE_DEPTH (12) +#define NBL_DSTORE_RCV_PORT_BYTE_WIDTH (48) +#define NBL_DSTORE_RCV_PORT_BYTE_DWLEN (2) +union dstore_rcv_port_byte_u { + struct dstore_rcv_port_byte { + u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */ + u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_PORT_BYTE_DWLEN]; +} __packed; +#define NBL_DSTORE_RCV_PORT_BYTE_REG(r) (NBL_DSTORE_RCV_PORT_BYTE_ADDR + \ + (NBL_DSTORE_RCV_PORT_BYTE_DWLEN * 4) * (r)) + +#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_ADDR (0x7060c0) +#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_DEPTH (12) +#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_DWLEN (1) +union dstore_rcv_port_total_right_pkt_u { + struct dstore_rcv_port_total_right_pkt { + u32 cnt:32; /* [31:0] Default:0x0 RCTR */ + } __packed info; + u32 data[NBL_DSTORE_RCV_PORT_TOTAL_RIGHT_PKT_DWLEN]; +} __packed; + +#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_ADDR (0x706100) +#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_DEPTH (12) +#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_WIDTH (32) +#define NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_DWLEN (1) +union dstore_rcv_port_total_wrong_pkt_u { + struct dstore_rcv_port_total_wrong_pkt { + u32 cnt:32; /* 
[31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_TOTAL_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_ADDR (0x706140)
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_port_fwd_right_pkt_u {
+	struct dstore_rcv_port_fwd_right_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_FWD_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_ADDR (0x706180)
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_DWLEN (1)
+union dstore_rcv_port_fwd_wrong_pkt_u {
+	struct dstore_rcv_port_fwd_wrong_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_FWD_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_ADDR (0x7061c0)
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_DWLEN (1)
+union dstore_rcv_port_herr_right_pkt_u {
+	struct dstore_rcv_port_herr_right_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_HERR_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_ADDR (0x706200)
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_DWLEN (1)
+union dstore_rcv_port_herr_wrong_pkt_u {
+	struct dstore_rcv_port_herr_wrong_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_HERR_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_PORT_PKT_ADDR (0x706240)
+#define NBL_DSTORE_IPRO_PORT_PKT_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_PORT_PKT_DWLEN (1)
+union dstore_ipro_port_pkt_u {
+	struct dstore_ipro_port_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_IPRO_PORT_PKT_DWLEN];
+} __packed;
+#define NBL_DSTORE_IPRO_PORT_PKT_REG(r) (NBL_DSTORE_IPRO_PORT_PKT_ADDR + \
+	(NBL_DSTORE_IPRO_PORT_PKT_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_IPRO_PORT_BYTE_ADDR (0x706280)
+#define NBL_DSTORE_IPRO_PORT_BYTE_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_BYTE_WIDTH (48)
+#define NBL_DSTORE_IPRO_PORT_BYTE_DWLEN (2)
+union dstore_ipro_port_byte_u {
+	struct dstore_ipro_port_byte {
+		u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+		u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_IPRO_PORT_BYTE_DWLEN];
+} __packed;
+#define NBL_DSTORE_IPRO_PORT_BYTE_REG(r) (NBL_DSTORE_IPRO_PORT_BYTE_ADDR + \
+	(NBL_DSTORE_IPRO_PORT_BYTE_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_ADDR (0x706300)
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_DWLEN (1)
+union dstore_ipro_port_fwd_right_pkt_u {
+	struct dstore_ipro_port_fwd_right_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_IPRO_PORT_FWD_RIGHT_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_ADDR (0x706340)
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_DEPTH (12)
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_WIDTH (32)
+#define NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_DWLEN (1)
+union dstore_ipro_port_fwd_wrong_pkt_u {
+	struct dstore_ipro_port_fwd_wrong_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_IPRO_PORT_FWD_WRONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_PMEM_PORT_PKT_ADDR (0x706380)
+#define NBL_DSTORE_PMEM_PORT_PKT_DEPTH (12)
+#define NBL_DSTORE_PMEM_PORT_PKT_WIDTH (32)
+#define NBL_DSTORE_PMEM_PORT_PKT_DWLEN (1)
+union dstore_pmem_port_pkt_u {
+	struct dstore_pmem_port_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_PMEM_PORT_PKT_DWLEN];
+} __packed;
+#define NBL_DSTORE_PMEM_PORT_PKT_REG(r) (NBL_DSTORE_PMEM_PORT_PKT_ADDR + \
+	(NBL_DSTORE_PMEM_PORT_PKT_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_PMEM_PORT_BYTE_ADDR (0x7063c0)
+#define NBL_DSTORE_PMEM_PORT_BYTE_DEPTH (12)
+#define NBL_DSTORE_PMEM_PORT_BYTE_WIDTH (48)
+#define NBL_DSTORE_PMEM_PORT_BYTE_DWLEN (2)
+union dstore_pmem_port_byte_u {
+	struct dstore_pmem_port_byte {
+		u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+		u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_PMEM_PORT_BYTE_DWLEN];
+} __packed;
+#define NBL_DSTORE_PMEM_PORT_BYTE_REG(r) (NBL_DSTORE_PMEM_PORT_BYTE_ADDR + \
+	(NBL_DSTORE_PMEM_PORT_BYTE_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_ADDR (0x706440)
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_DWLEN (1)
+union dstore_rcv_err_port_drop_pkt_u {
+	struct dstore_rcv_err_port_drop_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_ERR_PORT_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_ADDR (0x706480)
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_DWLEN (1)
+union dstore_rcv_port_short_drop_pkt_u {
+	struct dstore_rcv_port_short_drop_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_SHORT_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_ADDR (0x7064c0)
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_DEPTH (12)
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_WIDTH (32)
+#define NBL_DSTORE_RCV_PORT_LONG_PKT_DWLEN (1)
+union dstore_rcv_port_long_pkt_u {
+	struct dstore_rcv_port_long_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_RCV_PORT_LONG_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_ADDR (0x706500)
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_DEPTH (12)
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_WIDTH (32)
+#define NBL_DSTORE_BUF_PORT_DROP_PKT_DWLEN (1)
+union dstore_buf_port_drop_pkt_u {
+	struct dstore_buf_port_drop_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_BUF_PORT_DROP_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_ADDR (0x706540)
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_DEPTH (12)
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_WIDTH (32)
+#define NBL_DSTORE_BUF_PORT_TRUN_PKT_DWLEN (1)
+union dstore_buf_port_trun_pkt_u {
+	struct dstore_buf_port_trun_pkt {
+		u32 cnt:32; /* [31:0] Default:0x0 SCTR */
+	} __packed info;
+	u32 data[NBL_DSTORE_BUF_PORT_TRUN_PKT_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BP_CUR_1ST_ADDR (0x706580)
+#define NBL_DSTORE_BP_CUR_1ST_DEPTH (1)
+#define NBL_DSTORE_BP_CUR_1ST_WIDTH (32)
+#define NBL_DSTORE_BP_CUR_1ST_DWLEN (1)
+union dstore_bp_cur_1st_u {
+	struct dstore_bp_cur_1st {
+		u32 link_fc:6; /* [5:0] Default:0x0 RO */
+		u32 rsv:2; /* [7:6] Default:0x0 RO */
+		u32 pfc:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_BP_CUR_1ST_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BP_CUR_2ND_ADDR (0x706584)
+#define NBL_DSTORE_BP_CUR_2ND_DEPTH (1)
+#define NBL_DSTORE_BP_CUR_2ND_WIDTH (32)
+#define NBL_DSTORE_BP_CUR_2ND_DWLEN (1)
+union dstore_bp_cur_2nd_u {
+	struct dstore_bp_cur_2nd {
+		u32 pfc:24; /* [23:0] Default:0x0 RO */
+		u32 rsv:8; /* [31:24] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_BP_CUR_2ND_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_BP_HISTORY_LINK_ADDR (0x706590)
+#define NBL_DSTORE_BP_HISTORY_LINK_DEPTH (6)
+#define NBL_DSTORE_BP_HISTORY_LINK_WIDTH (32)
+#define NBL_DSTORE_BP_HISTORY_LINK_DWLEN (1)
+union dstore_bp_history_link_u {
+	struct dstore_bp_history_link {
+		u32 fc:1; /* [0] Default:0x0 RC */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_BP_HISTORY_LINK_DWLEN];
+} __packed;
+#define NBL_DSTORE_BP_HISTORY_LINK_REG(r) (NBL_DSTORE_BP_HISTORY_LINK_ADDR + \
+	(NBL_DSTORE_BP_HISTORY_LINK_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_BP_HISTORY_ADDR (0x7065b0)
+#define NBL_DSTORE_BP_HISTORY_DEPTH (48)
+#define NBL_DSTORE_BP_HISTORY_WIDTH (32)
+#define NBL_DSTORE_BP_HISTORY_DWLEN (1)
+union dstore_bp_history_u {
+	struct dstore_bp_history {
+		u32 pfc:1; /* [0] Default:0x0 RC */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_BP_HISTORY_DWLEN];
+} __packed;
+#define NBL_DSTORE_BP_HISTORY_REG(r) (NBL_DSTORE_BP_HISTORY_ADDR + \
+	(NBL_DSTORE_BP_HISTORY_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_WRR_CUR_ADDR (0x706800)
+#define NBL_DSTORE_WRR_CUR_DEPTH (36)
+#define NBL_DSTORE_WRR_CUR_WIDTH (32)
+#define NBL_DSTORE_WRR_CUR_DWLEN (1)
+union dstore_wrr_cur_u {
+	struct dstore_wrr_cur {
+		u32 wgt_cos:5; /* [4:0] Default:0x0 RO */
+		u32 rsv:27; /* [31:5] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_WRR_CUR_DWLEN];
+} __packed;
+#define NBL_DSTORE_WRR_CUR_REG(r) (NBL_DSTORE_WRR_CUR_ADDR + \
+	(NBL_DSTORE_WRR_CUR_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_DDPORT_CUR_ADDR (0x707018)
+#define NBL_DSTORE_DDPORT_CUR_DEPTH (1)
+#define NBL_DSTORE_DDPORT_CUR_WIDTH (32)
+#define NBL_DSTORE_DDPORT_CUR_DWLEN (1)
+union dstore_ddport_cur_u {
+	struct dstore_ddport_cur {
+		u32 link_fc:5; /* [4:0] Default:0x0 RO */
+		u32 rsv:27; /* [31:5] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_DDPORT_CUR_DWLEN];
+} __packed;
+
+#define NBL_DSTORE_DDPORT_HISTORY_ADDR (0x70701c)
+#define NBL_DSTORE_DDPORT_HISTORY_DEPTH (5)
+#define NBL_DSTORE_DDPORT_HISTORY_WIDTH (32)
+#define NBL_DSTORE_DDPORT_HISTORY_DWLEN (1)
+union dstore_ddport_history_u {
+	struct dstore_ddport_history {
+		u32 link_fc:1; /* [0] Default:0x0 RC */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_DDPORT_HISTORY_DWLEN];
+} __packed;
+#define NBL_DSTORE_DDPORT_HISTORY_REG(r) (NBL_DSTORE_DDPORT_HISTORY_ADDR + \
+	(NBL_DSTORE_DDPORT_HISTORY_DWLEN * 4) * (r))
+
+#define NBL_DSTORE_DDPORT_RSC_ADD_ADDR (0x707050)
+#define NBL_DSTORE_DDPORT_RSC_ADD_DEPTH (5)
+#define NBL_DSTORE_DDPORT_RSC_ADD_WIDTH (32)
+#define NBL_DSTORE_DDPORT_RSC_ADD_DWLEN (1)
+union dstore_ddport_rsc_add_u {
+	struct dstore_ddport_rsc_add {
+		u32 cnt:12; /* [11:0] Default:0x0 RO */
+		u32 rsv:20; /* [31:12] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_DSTORE_DDPORT_RSC_ADD_DWLEN];
+} __packed;
+#define NBL_DSTORE_DDPORT_RSC_ADD_REG(r) (NBL_DSTORE_DDPORT_RSC_ADD_ADDR + \
+	(NBL_DSTORE_DDPORT_RSC_ADD_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
new file mode 100644
index 000000000000..3504c272c4d4
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_datapath_ucar.h
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_UCAR_H
+#define NBL_UCAR_H 1
+
+#include <linux/types.h>
+
+#define NBL_UCAR_BASE (0x00E84000)
+
+#define NBL_UCAR_INT_STATUS_ADDR (0xe84000)
+#define NBL_UCAR_INT_STATUS_DEPTH (1)
+#define NBL_UCAR_INT_STATUS_WIDTH (32)
+#define NBL_UCAR_INT_STATUS_DWLEN (1)
+union ucar_int_status_u {
+	struct ucar_int_status {
+		u32 color_err:1; /* [0] Default:0x0 RWC */
+		u32 parity_err:1; /* [1] Default:0x0 RWC */
+		u32 fifo_uflw_err:1; /* [2] Default:0x0 RWC */
+		u32 cif_err:1; /* [3] Default:0x0 RWC */
+		u32 fifo_dflw_err:1; /* [4] Default:0x0 RWC */
+		u32 atid_nomat_err:1; /* [5] Default:0x0 RWC */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INT_MASK_ADDR (0xe84004)
+#define NBL_UCAR_INT_MASK_DEPTH (1)
+#define NBL_UCAR_INT_MASK_WIDTH (32)
+#define NBL_UCAR_INT_MASK_DWLEN (1)
+union ucar_int_mask_u {
+	struct ucar_int_mask {
+		u32 color_err:1; /* [0] Default:0x1 RW */
+		u32 parity_err:1; /* [1] Default:0x0 RW */
+		u32 fifo_uflw_err:1; /* [2] Default:0x0 RW */
+		u32 cif_err:1; /* [3] Default:0x0 RW */
+		u32 fifo_dflw_err:1; /* [4] Default:0x0 RW */
+		u32 atid_nomat_err:1; /* [5] Default:0x1 RW */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INT_SET_ADDR (0xe84008)
+#define NBL_UCAR_INT_SET_DEPTH (1)
+#define NBL_UCAR_INT_SET_WIDTH (32)
+#define NBL_UCAR_INT_SET_DWLEN (1)
+union ucar_int_set_u {
+	struct ucar_int_set {
+		u32 color_err:1; /* [0] Default:0x0 WO */
+		u32 parity_err:1; /* [1] Default:0x0 WO */
+		u32 fifo_uflw_err:1; /* [2] Default:0x0 WO */
+		u32 cif_err:1; /* [3] Default:0x0 WO */
+		u32 fifo_dflw_err:1; /* [4] Default:0x0 WO */
+		u32 atid_nomat_err:1; /* [5] Default:0x0 WO */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_UCAR_PARITY_ERR_INFO_ADDR (0xe84104)
+#define NBL_UCAR_PARITY_ERR_INFO_DEPTH (1)
+#define NBL_UCAR_PARITY_ERR_INFO_WIDTH (32)
+#define NBL_UCAR_PARITY_ERR_INFO_DWLEN (1)
+union ucar_parity_err_info_u {
+	struct ucar_parity_err_info {
+		u32 ram_addr:12; /* [11:0] Default:0x0 RO */
+		u32 ram_id:3; /* [14:12] Default:0x0 RO */
+		u32 rsv:17; /* [31:15] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_PARITY_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CIF_ERR_INFO_ADDR (0xe8411c)
+#define NBL_UCAR_CIF_ERR_INFO_DEPTH (1)
+#define NBL_UCAR_CIF_ERR_INFO_WIDTH (32)
+#define NBL_UCAR_CIF_ERR_INFO_DWLEN (1)
+union ucar_cif_err_info_u {
+	struct ucar_cif_err_info {
+		u32 addr:30; /* [29:0] Default:0x0 RO */
+		u32 wr_err:1; /* [30] Default:0x0 RO */
+		u32 ucor_err:1; /* [31] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_ADDR (0xe84134)
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_DEPTH (1)
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_WIDTH (32)
+#define NBL_UCAR_ATID_NOMAT_ERR_INFO_DWLEN (1)
+union ucar_atid_nomat_err_info_u {
+	struct ucar_atid_nomat_err_info {
+		u32 id:2; /* [1:0] Default:0x0 RO */
+		u32 rsv:30; /* [31:2] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_ATID_NOMAT_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_CTRL_ADDR (0xe84200)
+#define NBL_UCAR_CAR_CTRL_DEPTH (1)
+#define NBL_UCAR_CAR_CTRL_WIDTH (32)
+#define NBL_UCAR_CAR_CTRL_DWLEN (1)
+union ucar_car_ctrl_u {
+	struct ucar_car_ctrl {
+		u32 sctr_car:1; /* [0] Default:0x1 RW */
+		u32 rctr_car:1; /* [1] Default:0x1 RW */
+		u32 rc_car:1; /* [2] Default:0x1 RW */
+		u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INIT_START_ADDR (0xe84204)
+#define NBL_UCAR_INIT_START_DEPTH (1)
+#define NBL_UCAR_INIT_START_WIDTH (32)
+#define NBL_UCAR_INIT_START_DWLEN (1)
+union ucar_init_start_u {
+	struct ucar_init_start {
+		u32 start:1; /* [0] Default:0x0 WO */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FWD_CARID_ADDR (0xe84210)
+#define NBL_UCAR_FWD_CARID_DEPTH (1)
+#define NBL_UCAR_FWD_CARID_WIDTH (32)
+#define NBL_UCAR_FWD_CARID_DWLEN (1)
+union ucar_fwd_carid_u {
+	struct ucar_fwd_carid {
+		u32 act_id:6; /* [5:0] Default:0x5 RW */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_FWD_CARID_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FWD_FLOW_CAR_ADDR (0xe84214)
+#define NBL_UCAR_FWD_FLOW_CAR_DEPTH (1)
+#define NBL_UCAR_FWD_FLOW_CAR_WIDTH (32)
+#define NBL_UCAR_FWD_FLOW_CAR_DWLEN (1)
+union ucar_fwd_flow_car_u {
+	struct ucar_fwd_flow_car {
+		u32 act_id:6; /* [5:0] Default:0x6 RW */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_FWD_FLOW_CAR_DWLEN];
+} __packed;
+
+#define NBL_UCAR_PBS_SUB_ADDR (0xe84224)
+#define NBL_UCAR_PBS_SUB_DEPTH (1)
+#define NBL_UCAR_PBS_SUB_WIDTH (32)
+#define NBL_UCAR_PBS_SUB_DWLEN (1)
+union ucar_pbs_sub_u {
+	struct ucar_pbs_sub {
+		u32 sel:1; /* [0] Default:0x0 RW */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_PBS_SUB_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FLOW_TIMMING_ADD_ADDR (0xe84400)
+#define NBL_UCAR_FLOW_TIMMING_ADD_DEPTH (1)
+#define NBL_UCAR_FLOW_TIMMING_ADD_WIDTH (32)
+#define NBL_UCAR_FLOW_TIMMING_ADD_DWLEN (1)
+union ucar_flow_timming_add_u {
+	struct ucar_flow_timming_add {
+		u32 cycle_max:12; /* [11:0] Default:0x4 RW */
+		u32 rsv1:4; /* [15:12] Default:0x0 RO */
+		u32 depth:14; /* [29:16] Default:0x4B0 RW */
+		u32 rsv:2; /* [31:30] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_FLOW_TIMMING_ADD_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_ADDR (0xe84404)
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_DEPTH (1)
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_WIDTH (32)
+#define NBL_UCAR_FLOW_4K_TIMMING_ADD_DWLEN (1)
+union ucar_flow_4k_timming_add_u {
+	struct ucar_flow_4k_timming_add {
+		u32 cycle_max:12; /* [11:0] Default:0x4 RW */
+		u32 depth:18; /* [29:12] Default:0x12C0 RW */
+		u32 rsv:2; /* [31:30] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_FLOW_4K_TIMMING_ADD_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INIT_DONE_ADDR (0xe84408)
+#define NBL_UCAR_INIT_DONE_DEPTH (1)
+#define NBL_UCAR_INIT_DONE_WIDTH (32)
+#define NBL_UCAR_INIT_DONE_DWLEN (1)
+union ucar_init_done_u {
+	struct ucar_init_done {
+		u32 done:1; /* [0] Default:0x0 RO */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_UCAR_INPUT_CELL_ADDR (0xe8441c)
+#define NBL_UCAR_INPUT_CELL_DEPTH (1)
+#define NBL_UCAR_INPUT_CELL_WIDTH (32)
+#define NBL_UCAR_INPUT_CELL_DWLEN (1)
+union ucar_input_cell_u {
+	struct ucar_input_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_INPUT_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_RD_CELL_ADDR (0xe84420)
+#define NBL_UCAR_RD_CELL_DEPTH (1)
+#define NBL_UCAR_RD_CELL_WIDTH (32)
+#define NBL_UCAR_RD_CELL_DWLEN (1)
+union ucar_rd_cell_u {
+	struct ucar_rd_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_RD_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_CELL_ADDR (0xe84424)
+#define NBL_UCAR_CAR_CELL_DEPTH (1)
+#define NBL_UCAR_CAR_CELL_WIDTH (32)
+#define NBL_UCAR_CAR_CELL_DWLEN (1)
+union ucar_car_cell_u {
+	struct ucar_car_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_CAR_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_FLOW_CELL_ADDR (0xe84428)
+#define NBL_UCAR_CAR_FLOW_CELL_DEPTH (1)
+#define NBL_UCAR_CAR_FLOW_CELL_WIDTH (32)
+#define NBL_UCAR_CAR_FLOW_CELL_DWLEN (1)
+union ucar_car_flow_cell_u {
+	struct ucar_car_flow_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_CAR_FLOW_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_CAR_FLOW_4K_CELL_ADDR (0xe8442c)
+#define NBL_UCAR_CAR_FLOW_4K_CELL_DEPTH (1)
+#define NBL_UCAR_CAR_FLOW_4K_CELL_WIDTH (32)
+#define NBL_UCAR_CAR_FLOW_4K_CELL_DWLEN (1)
+union ucar_car_flow_4k_cell_u {
+	struct ucar_car_flow_4k_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_CAR_FLOW_4K_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_NOCAR_CELL_ADDR (0xe84430)
+#define NBL_UCAR_NOCAR_CELL_DEPTH (1)
+#define NBL_UCAR_NOCAR_CELL_WIDTH (32)
+#define NBL_UCAR_NOCAR_CELL_DWLEN (1)
+union ucar_nocar_cell_u {
+	struct ucar_nocar_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_NOCAR_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_NOCAR_ERR_ADDR (0xe84434)
+#define NBL_UCAR_NOCAR_ERR_DEPTH (1)
+#define NBL_UCAR_NOCAR_ERR_WIDTH (32)
+#define NBL_UCAR_NOCAR_ERR_DWLEN (1)
+union ucar_nocar_err_u {
+	struct ucar_nocar_err {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_NOCAR_ERR_DWLEN];
+} __packed;
+
+#define NBL_UCAR_GREEN_CELL_ADDR (0xe84438)
+#define NBL_UCAR_GREEN_CELL_DEPTH (1)
+#define NBL_UCAR_GREEN_CELL_WIDTH (32)
+#define NBL_UCAR_GREEN_CELL_DWLEN (1)
+union ucar_green_cell_u {
+	struct ucar_green_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_GREEN_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_YELLOW_CELL_ADDR (0xe8443c)
+#define NBL_UCAR_YELLOW_CELL_DEPTH (1)
+#define NBL_UCAR_YELLOW_CELL_WIDTH (32)
+#define NBL_UCAR_YELLOW_CELL_DWLEN (1)
+union ucar_yellow_cell_u {
+	struct ucar_yellow_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_YELLOW_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_RED_CELL_ADDR (0xe84440)
+#define NBL_UCAR_RED_CELL_DEPTH (1)
+#define NBL_UCAR_RED_CELL_WIDTH (32)
+#define NBL_UCAR_RED_CELL_DWLEN (1)
+union ucar_red_cell_u {
+	struct ucar_red_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_RED_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_NOCAR_PKT_ADDR (0xe84444)
+#define NBL_UCAR_NOCAR_PKT_DEPTH (1)
+#define NBL_UCAR_NOCAR_PKT_WIDTH (48)
+#define NBL_UCAR_NOCAR_PKT_DWLEN (2)
+union ucar_nocar_pkt_u {
+	struct ucar_nocar_pkt {
+		u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+		u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_NOCAR_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_GREEN_PKT_ADDR (0xe8444c)
+#define NBL_UCAR_GREEN_PKT_DEPTH (1)
+#define NBL_UCAR_GREEN_PKT_WIDTH (48)
+#define NBL_UCAR_GREEN_PKT_DWLEN (2)
+union ucar_green_pkt_u {
+	struct ucar_green_pkt {
+		u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+		u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_GREEN_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_YELLOW_PKT_ADDR (0xe84454)
+#define NBL_UCAR_YELLOW_PKT_DEPTH (1)
+#define NBL_UCAR_YELLOW_PKT_WIDTH (48)
+#define NBL_UCAR_YELLOW_PKT_DWLEN (2)
+union ucar_yellow_pkt_u {
+	struct ucar_yellow_pkt {
+		u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+		u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_YELLOW_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_RED_PKT_ADDR (0xe8445c)
+#define NBL_UCAR_RED_PKT_DEPTH (1)
+#define NBL_UCAR_RED_PKT_WIDTH (48)
+#define NBL_UCAR_RED_PKT_DWLEN (2)
+union ucar_red_pkt_u {
+	struct ucar_red_pkt {
+		u32 cnt_l:32; /* [47:0] Default:0x0 RCTR */
+		u32 cnt_h:16; /* [47:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_RED_PKT_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_ADDR (0xe84464)
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_DEPTH (1)
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_WIDTH (32)
+#define NBL_UCAR_FWD_TYPE_WRONG_CELL_DWLEN (1)
+union ucar_fwd_type_wrong_cell_u {
+	struct ucar_fwd_type_wrong_cell {
+		u32 cnt:32; /* [31:0] Default:0x0 RCTR */
+	} __packed info;
+	u32 data[NBL_UCAR_FWD_TYPE_WRONG_CELL_DWLEN];
+} __packed;
+
+#define NBL_UCAR_FLOW_ADDR (0xe88000)
+#define NBL_UCAR_FLOW_DEPTH (1024)
+#define NBL_UCAR_FLOW_WIDTH (128)
+#define NBL_UCAR_FLOW_DWLEN (4)
+union ucar_flow_u {
+	struct ucar_flow {
+		u32 valid:1; /* [0] Default:0x0 RW */
+		u32 depth:19; /* [19:1] Default:0x0 RW */
+		u32 cir:19; /* [38:20] Default:0x0 RW */
+		u32 pir:19; /* [57:39] Default:0x0 RW */
+		u32 cbs:21; /* [78:58] Default:0x0 RW */
+		u32 pbs:21; /* [99:79] Default:0x0 RW */
+		u32 rsv:28; /* [127:100] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_FLOW_DWLEN];
+} __packed;
+#define NBL_UCAR_FLOW_REG(r) (NBL_UCAR_FLOW_ADDR + \
+	(NBL_UCAR_FLOW_DWLEN * 4) * (r))
+
+#define NBL_UCAR_FLOW_4K_ADDR (0xe94000)
+#define NBL_UCAR_FLOW_4K_DEPTH (4096)
+#define NBL_UCAR_FLOW_4K_WIDTH (128)
+#define NBL_UCAR_FLOW_4K_DWLEN (4)
+union ucar_flow_4k_u {
+	struct ucar_flow_4k {
+		u32 valid:1; /* [0] Default:0x0 RW */
+		u32 depth:21; /* [21:1] Default:0x0 RW */
+		u32 cir:21; /* [42:22] Default:0x0 RW */
+		u32 pir:21; /* [63:43] Default:0x0 RW */
+		u32 cbs:23; /* [86:64] Default:0x0 RW */
+		u32 pbs:23; /* [109:87] Default:0x0 RW */
+		u32 rsv:18; /* [127:110] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_UCAR_FLOW_4K_DWLEN];
+} __packed;
+#define NBL_UCAR_FLOW_4K_REG(r) (NBL_UCAR_FLOW_4K_ADDR + \
+	(NBL_UCAR_FLOW_4K_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
new file mode 100644
index 000000000000..47bda61dbf97
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#include "nbl_ppe_ipro.h"
+#include "nbl_ppe_epro.h"
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
new file mode 100644
index 000000000000..7c36f4ad11b4
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_epro.h
@@ -0,0 +1,665 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_EPRO_H
+#define NBL_EPRO_H 1
+
+#include <linux/types.h>
+
+#define NBL_EPRO_BASE (0x00E74000)
+
+#define NBL_EPRO_INT_STATUS_ADDR (0xe74000)
+#define NBL_EPRO_INT_STATUS_DEPTH (1)
+#define NBL_EPRO_INT_STATUS_WIDTH (32)
+#define NBL_EPRO_INT_STATUS_DWLEN (1)
+union epro_int_status_u {
+	struct epro_int_status {
+		u32 fatal_err:1; /* [0] Default:0x0 RWC */
+		u32 fifo_uflw_err:1; /* [1] Default:0x0 RWC */
+		u32 fifo_dflw_err:1; /* [2] Default:0x0 RWC */
+		u32 cif_err:1; /* [3] Default:0x0 RWC */
+		u32 input_err:1; /* [4] Default:0x0 RWC */
+		u32 cfg_err:1; /* [5] Default:0x0 RWC */
+		u32 data_ucor_err:1; /* [6] Default:0x0 RWC */
+		u32 data_cor_err:1; /* [7] Default:0x0 RWC */
+		u32 rsv:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INT_MASK_ADDR (0xe74004)
+#define NBL_EPRO_INT_MASK_DEPTH (1)
+#define NBL_EPRO_INT_MASK_WIDTH (32)
+#define NBL_EPRO_INT_MASK_DWLEN (1)
+union epro_int_mask_u {
+	struct epro_int_mask {
+		u32 fatal_err:1; /* [0] Default:0x0 RW */
+		u32 fifo_uflw_err:1; /* [1] Default:0x0 RW */
+		u32 fifo_dflw_err:1; /* [2] Default:0x0 RW */
+		u32 cif_err:1; /* [3] Default:0x0 RW */
+		u32 input_err:1; /* [4] Default:0x0 RW */
+		u32 cfg_err:1; /* [5] Default:0x0 RW */
+		u32 data_ucor_err:1; /* [6] Default:0x0 RW */
+		u32 data_cor_err:1; /* [7] Default:0x0 RW */
+		u32 rsv:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INT_SET_ADDR (0xe74008)
+#define NBL_EPRO_INT_SET_DEPTH (1)
+#define NBL_EPRO_INT_SET_WIDTH (32)
+#define NBL_EPRO_INT_SET_DWLEN (1)
+union epro_int_set_u {
+	struct epro_int_set {
+		u32 fatal_err:1; /* [0] Default:0x0 WO */
+		u32 fifo_uflw_err:1; /* [1] Default:0x0 WO */
+		u32 fifo_dflw_err:1; /* [2] Default:0x0 WO */
+		u32 cif_err:1; /* [3] Default:0x0 WO */
+		u32 input_err:1; /* [4] Default:0x0 WO */
+		u32 cfg_err:1; /* [5] Default:0x0 WO */
+		u32 data_ucor_err:1; /* [6] Default:0x0 WO */
+		u32 data_cor_err:1; /* [7] Default:0x0 WO */
+		u32 rsv:24; /* [31:8] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INIT_DONE_ADDR (0xe7400c)
+#define NBL_EPRO_INIT_DONE_DEPTH (1)
+#define NBL_EPRO_INIT_DONE_WIDTH (32)
+#define NBL_EPRO_INIT_DONE_DWLEN (1)
+union epro_init_done_u {
+	struct epro_init_done {
+		u32 done:1; /* [0] Default:0x0 RO */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CIF_ERR_INFO_ADDR (0xe74040)
+#define NBL_EPRO_CIF_ERR_INFO_DEPTH (1)
+#define NBL_EPRO_CIF_ERR_INFO_WIDTH (32)
+#define NBL_EPRO_CIF_ERR_INFO_DWLEN (1)
+union epro_cif_err_info_u {
+	struct epro_cif_err_info {
+		u32 addr:30; /* [29:0] Default:0x0 RO */
+		u32 wr_err:1; /* [30] Default:0x0 RO */
+		u32 ucor_err:1; /* [31] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CFG_ERR_INFO_ADDR (0xe74050)
+#define NBL_EPRO_CFG_ERR_INFO_DEPTH (1)
+#define NBL_EPRO_CFG_ERR_INFO_WIDTH (32)
+#define NBL_EPRO_CFG_ERR_INFO_DWLEN (1)
+union epro_cfg_err_info_u {
+	struct epro_cfg_err_info {
+		u32 addr:10; /* [9:0] Default:0x0 RO */
+		u32 id:3; /* [12:10] Default:0x0 RO */
+		u32 rsv:19; /* [31:13] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_CFG_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CAR_CTRL_ADDR (0xe74100)
+#define NBL_EPRO_CAR_CTRL_DEPTH (1)
+#define NBL_EPRO_CAR_CTRL_WIDTH (32)
+#define NBL_EPRO_CAR_CTRL_DWLEN (1)
+union epro_car_ctrl_u {
+	struct epro_car_ctrl {
+		u32 sctr_car:1; /* [0] Default:0x1 RW */
+		u32 rctr_car:1; /* [1] Default:0x1 RW */
+		u32 rc_car:1; /* [2] Default:0x1 RW */
+		u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_EPRO_INIT_START_ADDR (0xe74180)
+#define NBL_EPRO_INIT_START_DEPTH (1)
+#define NBL_EPRO_INIT_START_WIDTH (32)
+#define NBL_EPRO_INIT_START_DWLEN (1)
+union epro_init_start_u {
+	struct epro_init_start {
+		u32 start:1; /* [0] Default:0x0 WO */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_EPRO_FLAG_SEL_ADDR (0xe74200)
+#define NBL_EPRO_FLAG_SEL_DEPTH (1)
+#define NBL_EPRO_FLAG_SEL_WIDTH (32)
+#define NBL_EPRO_FLAG_SEL_DWLEN (1)
+union epro_flag_sel_u {
+	struct epro_flag_sel {
+		u32 dir_offset_en:1; /* [0] Default:0x1 RW */
+		u32 dir_offset:5; /* [5:1] Default:0x0 RW */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_FLAG_SEL_DWLEN];
+} __packed;
+
+#define NBL_EPRO_ACT_SEL_EN_ADDR (0xe74214)
+#define NBL_EPRO_ACT_SEL_EN_DEPTH (1)
+#define NBL_EPRO_ACT_SEL_EN_WIDTH (32)
+#define NBL_EPRO_ACT_SEL_EN_DWLEN (1)
+union epro_act_sel_en_u {
+	struct epro_act_sel_en {
+		u32 rssidx_en:1; /* [0] Default:0x1 RW */
+		u32 dport_en:1; /* [1] Default:0x1 RW */
+		u32 mirroridx_en:1; /* [2] Default:0x1 RW */
+		u32 dqueue_en:1; /* [3] Default:0x1 RW */
+		u32 encap_en:1; /* [4] Default:0x1 RW */
+		u32 pop_8021q_en:1; /* [5] Default:0x1 RW */
+		u32 pop_qinq_en:1; /* [6] Default:0x1 RW */
+		u32 push_cvlan_en:1; /* [7] Default:0x1 RW */
+		u32 push_svlan_en:1; /* [8] Default:0x1 RW */
+		u32 replace_cvlan_en:1; /* [9] Default:0x1 RW */
+		u32 replace_svlan_en:1; /* [10] Default:0x1 RW */
+		u32 rsv:21; /* [31:11] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_ACT_SEL_EN_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID0_ADDR (0xe74218)
+#define NBL_EPRO_AM_ACT_ID0_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID0_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID0_DWLEN (1)
+union epro_am_act_id0_u {
+	struct epro_am_act_id0 {
+		u32 replace_cvlan:6; /* [5:0] Default:0x2b RW */
+		u32 rsv3:2; /* [7:6] Default:0x0 RO */
+		u32 replace_svlan:6; /* [13:8] Default:0x2a RW */
+		u32 rsv2:2; /* [15:14] Default:0x0 RO */
+		u32 push_cvlan:6; /* [21:16] Default:0x2d RW */
+		u32 rsv1:2; /* [23:22] Default:0x0 RO */
+		u32 push_svlan:6; /* [29:24] Default:0x2c RW */
+		u32 rsv:2; /* [31:30] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_AM_ACT_ID0_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID1_ADDR (0xe7421c)
+#define NBL_EPRO_AM_ACT_ID1_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID1_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID1_DWLEN (1)
+union epro_am_act_id1_u {
+	struct epro_am_act_id1 {
+		u32 pop_qinq:6; /* [5:0] Default:0x29 RW */
+		u32 rsv3:2; /* [7:6] Default:0x0 RO */
+		u32 pop_8021q:6; /* [13:08] Default:0x28 RW */
+		u32 rsv2:2; /* [15:14] Default:0x0 RO */
+		u32 dport:6; /* [21:16] Default:0x9 RW */
+		u32 rsv1:2; /* [23:22] Default:0x0 RO */
+		u32 dqueue:6; /* [29:24] Default:0xa RW */
+		u32 rsv:2; /* [31:30] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_AM_ACT_ID1_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID2_ADDR (0xe74220)
+#define NBL_EPRO_AM_ACT_ID2_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID2_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID2_DWLEN (1)
+union epro_am_act_id2_u {
+	struct epro_am_act_id2 {
+		u32 rssidx:6; /* [5:0] Default:0x4 RW */
+		u32 rsv3:2; /* [7:6] Default:0x0 RO */
+		u32 mirroridx:6; /* [13:8] Default:0x8 RW */
+		u32 rsv2:2; /* [15:14] Default:0x0 RO */
+		u32 car:6; /* [21:16] Default:0x5 RW */
+		u32 rsv1:2; /* [23:22] Default:0x0 RO */
+		u32 encap:6; /* [29:24] Default:0x2e RW */
+		u32 rsv:2; /* [31:30] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_AM_ACT_ID2_DWLEN];
+} __packed;
+
+#define NBL_EPRO_AM_ACT_ID3_ADDR (0xe74224)
+#define NBL_EPRO_AM_ACT_ID3_DEPTH (1)
+#define NBL_EPRO_AM_ACT_ID3_WIDTH (32)
+#define NBL_EPRO_AM_ACT_ID3_DWLEN (1)
+union epro_am_act_id3_u {
+	struct epro_am_act_id3 {
+		u32 outer_sport_mdf:6; /* [5:0] Default:0x30 RW */
+		u32 rsv3:2; /* [7:6] Default:0x0 RO */
+		u32 pri_mdf:6; /* [13:8] Default:0x15 RW */
+		u32 rsv2:2; /* [15:14] Default:0x0 RO */
+		u32 dp_hash0:6; /* [21:16] Default:0x13 RW */
+		u32 rsv1:2; /* [23:22] Default:0x0 RO */
+		u32 dp_hash1:6; /* [29:24] Default:0x14 RW */
+		u32 rsv:2; /* [31:30] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_AM_ACT_ID3_DWLEN];
+} __packed;
+
+#define NBL_EPRO_ACTION_PRIORITY_ADDR (0xe74230)
+#define NBL_EPRO_ACTION_PRIORITY_DEPTH (1)
+#define NBL_EPRO_ACTION_PRIORITY_WIDTH (32)
+#define NBL_EPRO_ACTION_PRIORITY_DWLEN (1)
+union epro_action_priority_u {
+	struct epro_action_priority {
+		u32 mirroridx:2; /* [1:0] Default:0x0 RW */
+		u32 car:2; /* [3:2] Default:0x0 RW */
+		u32 dqueue:2; /* [5:4] Default:0x0 RW */
+		u32 dport:2; /* [7:6] Default:0x0 RW */
+		u32 pop_8021q:2; /* [9:8] Default:0x0 RW */
+		u32 pop_qinq:2; /* [11:10] Default:0x0 RW */
+		u32 replace_inner_vlan:2; /* [13:12] Default:0x0 RW */
+		u32 replace_outer_vlan:2; /* [15:14] Default:0x0 RW */
+		u32 push_inner_vlan:2; /* [17:16] Default:0x0 RW */
+		u32 push_outer_vlan:2; /* [19:18] Default:0x0 RW */
+		u32 outer_sport_mdf:2; /* [21:20] Default:0x0 RW */
+		u32 pri_mdf:2; /* [23:22] Default:0x0 RW */
+		u32 dp_hash0:2; /* [25:24] Default:0x0 RW */
+		u32 dp_hash1:2; /* [27:26] Default:0x0 RW */
+		u32 rsv:4; /* [31:28] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_ACTION_PRIORITY_DWLEN];
+} __packed;
+
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_ADDR (0xe74234)
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_DEPTH (1)
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_WIDTH (32)
+#define NBL_EPRO_MIRROR_ACTION_PRIORITY_DWLEN (1)
+union epro_mirror_action_priority_u {
+	struct epro_mirror_action_priority {
+		u32 car:2; /* [1:0] Default:0x0 RW */
+		u32 dqueue:2; /* [3:2] Default:0x0 RW */
+		u32 dport:2; /* [5:4] Default:0x0 RW */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_MIRROR_ACTION_PRIORITY_DWLEN];
+} __packed;
+
+#define NBL_EPRO_SET_FLAGS_ADDR (0xe74238)
+#define NBL_EPRO_SET_FLAGS_DEPTH (1)
+#define NBL_EPRO_SET_FLAGS_WIDTH (32)
+#define NBL_EPRO_SET_FLAGS_DWLEN (1)
+union epro_set_flags_u {
+	struct epro_set_flags {
+		u32 set_flags:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_SET_FLAGS_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CLEAR_FLAGS_ADDR (0xe7423c)
+#define NBL_EPRO_CLEAR_FLAGS_DEPTH (1)
+#define NBL_EPRO_CLEAR_FLAGS_WIDTH (32)
+#define NBL_EPRO_CLEAR_FLAGS_DWLEN (1)
+union epro_clear_flags_u {
+	struct epro_clear_flags {
+		u32 clear_flags:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_CLEAR_FLAGS_DWLEN];
+} __packed;
+
+#define NBL_EPRO_RSS_SK_ADDR (0xe74400)
+#define NBL_EPRO_RSS_SK_DEPTH (1)
+#define NBL_EPRO_RSS_SK_WIDTH (320)
+#define NBL_EPRO_RSS_SK_DWLEN (10)
+union epro_rss_sk_u {
+	struct epro_rss_sk {
+		u32 sk_arr[10]; /* [319:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_RSS_SK_DWLEN];
+} __packed;
+
+#define NBL_EPRO_VXLAN_SP_ADDR (0xe74500)
+#define NBL_EPRO_VXLAN_SP_DEPTH (1)
+#define NBL_EPRO_VXLAN_SP_WIDTH (32)
+#define NBL_EPRO_VXLAN_SP_DWLEN (1)
+union epro_vxlan_sp_u {
+	struct epro_vxlan_sp {
+		u32 vxlan_tnl_sp_min:16; /* [15:0] Default:0x8000 RW */
+		u32 vxlan_tnl_sp_max:16; /* [31:16] Default:0xee48 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_VXLAN_SP_DWLEN];
+} __packed;
+
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_ADDR (0xe74600)
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_DEPTH (1)
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_WIDTH (32)
+#define NBL_EPRO_LOOP_SCH_COS_DEFAULT_DWLEN (1)
+union epro_loop_sch_cos_default_u {
+	struct epro_loop_sch_cos_default {
+		u32 sch_cos:3; /* [2:0] Default:0x0 RW */
+		u32 pfc_mode:1; /* [3] Default:0x0 RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_LOOP_SCH_COS_DEFAULT_DWLEN];
+} __packed;
+
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_ADDR (0xe74604)
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_DEPTH (1)
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_WIDTH (32)
+#define NBL_EPRO_MIRROR_PKT_COS_DEFAULT_DWLEN (1)
+union epro_mirror_pkt_cos_default_u {
+	struct epro_mirror_pkt_cos_default {
+		u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+		u32 rsv:29; /* [31:3] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_MIRROR_PKT_COS_DEFAULT_DWLEN];
+} __packed;
+
+#define NBL_EPRO_NO_DPORT_REDIRECT_ADDR (0xe7463c)
+#define NBL_EPRO_NO_DPORT_REDIRECT_DEPTH (1)
+#define NBL_EPRO_NO_DPORT_REDIRECT_WIDTH (32)
+#define NBL_EPRO_NO_DPORT_REDIRECT_DWLEN (1)
+union epro_no_dport_redirect_u {
+	struct epro_no_dport_redirect {
+		u32 dport:16; /* [15:0] Default:0x0 RW */
+		u32 dqueue:11; /* [26:16] Default:0x0 RW */
+		u32 dqueue_en:1; /* [27] Default:0x0 RW */
+		u32 rsv:4; /* [31:28] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_NO_DPORT_REDIRECT_DWLEN];
+} __packed;
+
+#define NBL_EPRO_SCH_COS_MAP_ETH0_ADDR (0xe74640)
+#define NBL_EPRO_SCH_COS_MAP_ETH0_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH0_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH0_DWLEN (1)
+union epro_sch_cos_map_eth0_u {
+	struct epro_sch_cos_map_eth0 {
+		u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+		u32 dscp:6; /* [8:3] Default:0x0 RW */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_SCH_COS_MAP_ETH0_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH0_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH0_ADDR + \
+	(NBL_EPRO_SCH_COS_MAP_ETH0_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_ETH1_ADDR (0xe74660)
+#define NBL_EPRO_SCH_COS_MAP_ETH1_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH1_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH1_DWLEN (1)
+union epro_sch_cos_map_eth1_u {
+	struct epro_sch_cos_map_eth1 {
+		u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+		u32 dscp:6; /* [8:3] Default:0x0 RW */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_SCH_COS_MAP_ETH1_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH1_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH1_ADDR + \
+	(NBL_EPRO_SCH_COS_MAP_ETH1_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_ETH2_ADDR (0xe74680)
+#define NBL_EPRO_SCH_COS_MAP_ETH2_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH2_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH2_DWLEN (1)
+union epro_sch_cos_map_eth2_u {
+	struct epro_sch_cos_map_eth2 {
+		u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+		u32 dscp:6; /* [8:3] Default:0x0 RW */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_SCH_COS_MAP_ETH2_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH2_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH2_ADDR + \
+	(NBL_EPRO_SCH_COS_MAP_ETH2_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_ETH3_ADDR (0xe746a0)
+#define NBL_EPRO_SCH_COS_MAP_ETH3_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_ETH3_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_ETH3_DWLEN (1)
+union epro_sch_cos_map_eth3_u {
+	struct epro_sch_cos_map_eth3 {
+		u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+		u32 dscp:6; /* [8:3] Default:0x0 RW */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_SCH_COS_MAP_ETH3_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_ETH3_REG(r) (NBL_EPRO_SCH_COS_MAP_ETH3_ADDR + \
+	(NBL_EPRO_SCH_COS_MAP_ETH3_DWLEN * 4) * (r))
+
+#define NBL_EPRO_SCH_COS_MAP_LOOP_ADDR (0xe746c0)
+#define NBL_EPRO_SCH_COS_MAP_LOOP_DEPTH (8)
+#define NBL_EPRO_SCH_COS_MAP_LOOP_WIDTH (32)
+#define NBL_EPRO_SCH_COS_MAP_LOOP_DWLEN (1)
+union epro_sch_cos_map_loop_u {
+	struct epro_sch_cos_map_loop {
+		u32 pkt_cos:3; /* [2:0] Default:0x0 RW */
+		u32 dscp:6; /* [8:3] Default:0x0 RW */
+		u32 rsv:23; /* [31:9] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_SCH_COS_MAP_LOOP_DWLEN];
+} __packed;
+#define NBL_EPRO_SCH_COS_MAP_LOOP_REG(r) (NBL_EPRO_SCH_COS_MAP_LOOP_ADDR + \
+	(NBL_EPRO_SCH_COS_MAP_LOOP_DWLEN * 4) * (r))
+
+#define NBL_EPRO_PORT_PRI_MDF_EN_ADDR (0xe746e0)
+#define NBL_EPRO_PORT_PRI_MDF_EN_DEPTH (1)
+#define NBL_EPRO_PORT_PRI_MDF_EN_WIDTH (32)
+#define NBL_EPRO_PORT_PRI_MDF_EN_DWLEN (1)
+union epro_port_pri_mdf_en_u {
+	struct epro_port_pri_mdf_en {
+		u32 eth0:1; /* [0] Default:0x0 RW */
+		u32 eth1:1; /* [1] Default:0x0 RW */
+		u32 eth2:1; /* [2] Default:0x0 RW */
+		u32 eth3:1; /* [3] Default:0x0 RW */
+		u32 loop:1; /* [4] Default:0x0 RW */
+		u32 rsv:27; /* [31:5] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_PORT_PRI_MDF_EN_DWLEN];
+} __packed;
+
+#define NBL_EPRO_CFG_TEST_ADDR (0xe7480c)
+#define NBL_EPRO_CFG_TEST_DEPTH (1)
+#define NBL_EPRO_CFG_TEST_WIDTH (32)
+#define NBL_EPRO_CFG_TEST_DWLEN (1)
+union epro_cfg_test_u {
+	struct epro_cfg_test {
+		u32 test:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_CFG_TEST_DWLEN];
+} __packed;
+
+#define NBL_EPRO_BP_STATE_ADDR (0xe74b00)
+#define NBL_EPRO_BP_STATE_DEPTH (1)
+#define NBL_EPRO_BP_STATE_WIDTH (32)
+#define NBL_EPRO_BP_STATE_DWLEN (1)
+union epro_bp_state_u {
+	struct epro_bp_state {
+		u32 in_bp:1; /* [0] Default:0x0 RO */
+		u32 out_bp:1; /* [1] Default:0x0 RO */
+		u32 inter_bp:1; /* [2] Default:0x0 RO */
+		u32 rsv:29; /* [31:3] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_BP_STATE_DWLEN];
+} __packed;
+
+#define NBL_EPRO_BP_HISTORY_ADDR (0xe74b04)
+#define NBL_EPRO_BP_HISTORY_DEPTH (1)
+#define NBL_EPRO_BP_HISTORY_WIDTH (32)
+#define NBL_EPRO_BP_HISTORY_DWLEN (1)
+union epro_bp_history_u {
+	struct epro_bp_history {
+		u32 in_bp:1; /* [0] Default:0x0 RC */
+		u32 out_bp:1; /* [1] Default:0x0 RC */
+		u32 inter_bp:1; /* [2] Default:0x0 RC */
+		u32 rsv:29; /* [31:3] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_BP_HISTORY_DWLEN];
+} __packed;
+
+#define NBL_EPRO_MT_ADDR (0xe75400)
+#define NBL_EPRO_MT_DEPTH (16)
+#define NBL_EPRO_MT_WIDTH (64)
+#define NBL_EPRO_MT_DWLEN (2)
+#define NBL_EPRO_MT_MAX (8)
+union epro_mt_u {
+	struct epro_mt {
+		u32 dport:16; /* [15:0] Default:0x0 RW */
+		u32 dqueue:11; /* [26:16] Default:0x0 RW */
+		u32 car_en:1; /* [27] Default:0x0 RW */
+		u32 car_id:10; /* [37:28] Default:0x0 RW */
+		u32 vld:1; /* [38] Default:0x0 RW */
+		u32 rsv:25; /* [63:39] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_MT_DWLEN];
+} __packed;
+#define NBL_EPRO_MT_REG(r) (NBL_EPRO_MT_ADDR + \
+	(NBL_EPRO_MT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_KG_TCAM_ADDR (0xe75480)
+#define NBL_EPRO_KG_TCAM_DEPTH (16)
+#define NBL_EPRO_KG_TCAM_WIDTH (64)
+#define NBL_EPRO_KG_TCAM_DWLEN (2)
+union epro_kg_tcam_u {
+	struct epro_kg_tcam {
+		u32 mask:16; /* [15:0] Default:0x0 RW */
+		u32 data:16; /* [31:16] Default:0x0 RW */
+		u32 valid_bit:1; /* [32] Default:0x0 RW */
+		u32 rsv:31; /* [63:33] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_KG_TCAM_DWLEN];
+} __packed;
+#define NBL_EPRO_KG_TCAM_REG(r) (NBL_EPRO_KG_TCAM_ADDR + \
+	(NBL_EPRO_KG_TCAM_DWLEN * 4) * (r))
+
+#define NBL_EPRO_VPT_ADDR (0xe78000)
+#define NBL_EPRO_VPT_DEPTH (1024)
+#define NBL_EPRO_VPT_WIDTH (64)
+#define NBL_EPRO_VPT_DWLEN (2)
+union epro_vpt_u {
+	struct epro_vpt {
+		u32 cvlan:16; /* [15:0] Default:0x0 RW */
+		u32 svlan:16; /* [31:16] Default:0x0 RW */
+		u32 fwd:1; /* [32] Default:0x0 RW */
+		u32 mirror_en:1; /* [33] Default:0x0 RW */
+		u32 mirror_id:4; /* [37:34] Default:0x0 RW */
+		u32 car_en:1; /* [38] Default:0x0 RW */
+		u32 car_id:10; /* [48:39] Default:0x0 RW */
+		u32 pop_vlan:2; /* [50:49] Default:0x0 RW */
+		u32 push_vlan:2; /* [52:51] Default:0x0 RW */
+		u32 replace_vlan:2; /* [54:53] Default:0x0 RW */
+		u32 rss_alg_sel:1; /* [55] Default:0x0 RW */
+		u32 rss_key_type_btm:2; /* [57:56] Default:0x0 RW */
+		u32 vld:1; /* [58] Default:0x0 RW */
+		u32 rsv:5; /* [63:59] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_VPT_DWLEN];
+} __packed;
+#define NBL_EPRO_VPT_REG(r) (NBL_EPRO_VPT_ADDR + \
+	(NBL_EPRO_VPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_EPT_ADDR (0xe75800)
+#define NBL_EPRO_EPT_DEPTH (8)
+#define NBL_EPRO_EPT_WIDTH (64)
+#define NBL_EPRO_EPT_DWLEN (2)
+union epro_ept_u {
+	struct epro_ept {
+		u32 cvlan:16; /* [15:0] Default:0x0 RW */
+		u32 svlan:16; /* [31:16] Default:0x0 RW */
+		u32 fwd:1; /* [32] Default:0x0 RW */
+		u32 mirror_en:1; /* [33] Default:0x0 RW */
+		u32 mirror_id:4; /* [37:34] Default:0x0 RW */
+		u32 pop_vlan:2; /* [39:38] Default:0x0 RW */
+		u32 push_vlan:2; /* [41:40] Default:0x0 RW */
+		u32 replace_vlan:2; /* [43:42] Default:0x0 RW */
+		u32 lag_alg_sel:2; /* [45:44] Default:0x0 RW */
+		u32 lag_port_btm:4; /* [49:46] Default:0x0 RW */
+		u32 lag_l2_protect_en:1; /* [50] Default:0x0 RW */
+		u32 pfc_sch_cos_default:3; /* [53:51] Default:0x0 RW */
+		u32 pfc_mode:1; /* [54] Default:0x0 RW */
+		u32 vld:1; /* [55] Default:0x0 RW */
+		u32 rsv:8; /* [63:56] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_EPT_DWLEN];
+} __packed;
+#define NBL_EPRO_EPT_REG(r) (NBL_EPRO_EPT_ADDR + \
+	(NBL_EPRO_EPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_AFT_ADDR (0xe75900)
+#define NBL_EPRO_AFT_DEPTH (16)
+#define NBL_EPRO_AFT_WIDTH (64)
+#define NBL_EPRO_AFT_DWLEN (2)
+union epro_aft_u {
+	struct epro_aft {
+		u32 action_filter_btm_arr[2]; /* [63:0] Default:0x0 RW */
+	} __packed info;
+	u64 data;
+} __packed;
+#define NBL_EPRO_AFT_REG(r) (NBL_EPRO_AFT_ADDR + \
+	(NBL_EPRO_AFT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_RSS_PT_ADDR (0xe76000)
+#define NBL_EPRO_RSS_PT_DEPTH (1024)
+#define NBL_EPRO_RSS_PT_WIDTH (64)
+#define NBL_EPRO_RSS_PT_DWLEN (2)
+union epro_rss_pt_u {
+	struct epro_rss_pt {
+		u32 entry_size:3; /* [2:0] Default:0x0 RW */
+		u32 offset1:14; /* [16:3] Default:0x0 RW */
+		u32 offset1_vld:1; /* [17:17] Default:0x0 RW */
+		u32 offset0:14; /* [31:18] Default:0x0 RW */
+		u32 offset0_vld:1; /* [32] Default:0x0 RW */
+		u32 vld:1; /* [33] Default:0x0 RW */
+		u32 rsv:30; /* [63:34] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_RSS_PT_DWLEN];
+} __packed;
+#define NBL_EPRO_RSS_PT_REG(r) (NBL_EPRO_RSS_PT_ADDR + \
+	(NBL_EPRO_RSS_PT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_ECPVPT_ADDR (0xe7a000)
+#define NBL_EPRO_ECPVPT_DEPTH (256)
+#define NBL_EPRO_ECPVPT_WIDTH (32)
+#define NBL_EPRO_ECPVPT_DWLEN (1)
+union epro_ecpvpt_u {
+	struct epro_ecpvpt {
+		u32 encap_cvlan_vld0:1; /* [0] Default:0x0 RW */
+		u32 encap_svlan_vld0:1; /* [1] Default:0x0 RW */
+		u32 encap_vlan_vld1_15:30; /* [31:2] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_ECPVPT_DWLEN];
+} __packed;
+#define NBL_EPRO_ECPVPT_REG(r) (NBL_EPRO_ECPVPT_ADDR + \
+	(NBL_EPRO_ECPVPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_ECPIPT_ADDR (0xe7b000)
+#define NBL_EPRO_ECPIPT_DEPTH (128)
+#define NBL_EPRO_ECPIPT_WIDTH (32)
+#define NBL_EPRO_ECPIPT_DWLEN (1)
+union epro_ecpipt_u {
+	struct epro_ecpipt {
+		u32 encap_ip_type0:1; /* [0] Default:0x0 RW */
+		u32 encap_ip_type1_31:31; /* [31:1] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_EPRO_ECPIPT_DWLEN];
+} __packed;
+#define NBL_EPRO_ECPIPT_REG(r) (NBL_EPRO_ECPIPT_ADDR + \
+	(NBL_EPRO_ECPIPT_DWLEN * 4) * (r))
+
+#define NBL_EPRO_RSS_RET_ADDR (0xe7c000)
+#define NBL_EPRO_RSS_RET_DEPTH (8192)
+#define NBL_EPRO_RSS_RET_WIDTH (32)
+#define NBL_EPRO_RSS_RET_DWLEN (1)
+union epro_rss_ret_u {
+	struct epro_rss_ret {
+		u32 dqueue0:11; /* [10:0] Default:0x0 RW */
+		u32 vld0:1; /* [11] Default:0x0 RW */
+		u32 rsv1:4; /* [15:12] Default:0x0 RO */
+		u32 dqueue1:11; /* [26:16] Default:0x0 RW */
+		u32 vld1:1; /* [27] Default:0x0 RW */
+		u32 rsv:4; /* [31:28] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_EPRO_RSS_RET_DWLEN];
+} __packed;
+#define NBL_EPRO_RSS_RET_REG(r) (NBL_EPRO_RSS_RET_ADDR + \
+	(NBL_EPRO_RSS_RET_DWLEN * 4) * (r))
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
new file mode 100644
index 000000000000..5f74a458a09a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/base/nbl_ppe_ipro.h
@@ -0,0 +1,1397 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+// Code generated by interstellar. DO NOT EDIT.
+// Compatible with leonis RTL tag 0710
+
+#ifndef NBL_IPRO_H
+#define NBL_IPRO_H 1
+
+#include <linux/types.h>
+
+#define NBL_IPRO_BASE (0x00B04000)
+
+#define NBL_IPRO_INT_STATUS_ADDR (0xb04000)
+#define NBL_IPRO_INT_STATUS_DEPTH (1)
+#define NBL_IPRO_INT_STATUS_WIDTH (32)
+#define NBL_IPRO_INT_STATUS_DWLEN (1)
+union ipro_int_status_u {
+	struct ipro_int_status {
+		u32 fatal_err:1; /* [0] Default:0x0 RWC */
+		u32 fifo_uflw_err:1; /* [1] Default:0x0 RWC */
+		u32 fifo_dflw_err:1; /* [2] Default:0x0 RWC */
+		u32 cif_err:1; /* [3] Default:0x0 RWC */
+		u32 input_err:1; /* [4] Default:0x0 RWC */
+		u32 cfg_err:1; /* [5] Default:0x0 RWC */
+		u32 data_ucor_err:1; /* [6] Default:0x0 RWC */
+		u32 rsv:25; /* [31:7] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_INT_STATUS_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INT_MASK_ADDR (0xb04004)
+#define NBL_IPRO_INT_MASK_DEPTH (1)
+#define NBL_IPRO_INT_MASK_WIDTH (32)
+#define NBL_IPRO_INT_MASK_DWLEN (1)
+union ipro_int_mask_u {
+	struct ipro_int_mask {
+		u32 fatal_err:1; /* [0] Default:0x0 RW */
+		u32 fifo_uflw_err:1; /* [1] Default:0x0 RW */
+		u32 fifo_dflw_err:1; /* [2] Default:0x0 RW */
+		u32 cif_err:1; /* [3] Default:0x0 RW */
+		u32 input_err:1; /* [4] Default:0x0 RW */
+		u32 cfg_err:1; /* [5] Default:0x0 RW */
+		u32 data_ucor_err:1; /* [6] Default:0x0 RW */
+		u32 rsv:25; /* [31:7] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_INT_MASK_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INT_SET_ADDR (0xb04008)
+#define NBL_IPRO_INT_SET_DEPTH (1)
+#define NBL_IPRO_INT_SET_WIDTH (32)
+#define NBL_IPRO_INT_SET_DWLEN (1)
+union ipro_int_set_u {
+	struct ipro_int_set {
+		u32 fatal_err:1; /* [0] Default:0x0 WO */
+		u32 fifo_uflw_err:1; /* [1] Default:0x0 WO */
+		u32 fifo_dflw_err:1; /* [2] Default:0x0 WO */
+		u32 cif_err:1; /* [3] Default:0x0 WO */
+		u32 input_err:1; /* [4] Default:0x0 WO */
+		u32 cfg_err:1; /* [5] Default:0x0 WO */
+		u32 data_ucor_err:1; /* [6] Default:0x0 WO */
+		u32 rsv:25; /* [31:7] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_INT_SET_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INIT_DONE_ADDR (0xb0400c)
+#define NBL_IPRO_INIT_DONE_DEPTH (1)
+#define NBL_IPRO_INIT_DONE_WIDTH (32)
+#define NBL_IPRO_INIT_DONE_DWLEN (1)
+union ipro_init_done_u {
+	struct ipro_init_done {
+		u32 done:1; /* [0] Default:0x0 RO */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_INIT_DONE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CIF_ERR_INFO_ADDR (0xb04040)
+#define NBL_IPRO_CIF_ERR_INFO_DEPTH (1)
+#define NBL_IPRO_CIF_ERR_INFO_WIDTH (32)
+#define NBL_IPRO_CIF_ERR_INFO_DWLEN (1)
+union ipro_cif_err_info_u {
+	struct ipro_cif_err_info {
+		u32 addr:30; /* [29:0] Default:0x0 RO */
+		u32 wr_err:1; /* [30] Default:0x0 RO */
+		u32 ucor_err:1; /* [31] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_CIF_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INPUT_ERR_INFO_ADDR (0xb04048)
+#define NBL_IPRO_INPUT_ERR_INFO_DEPTH (1)
+#define NBL_IPRO_INPUT_ERR_INFO_WIDTH (32)
+#define NBL_IPRO_INPUT_ERR_INFO_DWLEN (1)
+union ipro_input_err_info_u {
+	struct ipro_input_err_info {
+		u32 id:2; /* [1:0] Default:0x0 RO */
+		u32 rsv:30; /* [31:2] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_INPUT_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CFG_ERR_INFO_ADDR (0xb04050)
+#define NBL_IPRO_CFG_ERR_INFO_DEPTH (1)
+#define NBL_IPRO_CFG_ERR_INFO_WIDTH (32)
+#define NBL_IPRO_CFG_ERR_INFO_DWLEN (1)
+union ipro_cfg_err_info_u {
+	struct ipro_cfg_err_info {
+		u32 id:2; /* [1:0] Default:0x0 RO */
+		u32 rsv:30; /* [31:2] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_CFG_ERR_INFO_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CAR_CTRL_ADDR (0xb04100)
+#define NBL_IPRO_CAR_CTRL_DEPTH (1)
+#define NBL_IPRO_CAR_CTRL_WIDTH (32)
+#define NBL_IPRO_CAR_CTRL_DWLEN (1)
+union ipro_car_ctrl_u {
+	struct ipro_car_ctrl {
+		u32 sctr_car:1; /* [0] Default:0x1 RW */
+		u32 rctr_car:1; /* [1] Default:0x1 RW */
+		u32 rc_car:1; /* [2] Default:0x1 RW */
+		u32 tbl_rc_car:1; /* [3] Default:0x1 RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_CAR_CTRL_DWLEN];
+} __packed;
+
+#define NBL_IPRO_INIT_START_ADDR (0xb04180)
+#define NBL_IPRO_INIT_START_DEPTH (1)
+#define NBL_IPRO_INIT_START_WIDTH (32)
+#define NBL_IPRO_INIT_START_DWLEN (1)
+union ipro_init_start_u {
+	struct ipro_init_start {
+		u32 init_start:1; /* [0] Default:0x0 WO */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_INIT_START_DWLEN];
+} __packed;
+
+#define NBL_IPRO_CREDIT_TOKEN_ADDR (0xb041c0)
+#define NBL_IPRO_CREDIT_TOKEN_DEPTH (1)
+#define NBL_IPRO_CREDIT_TOKEN_WIDTH (32)
+#define NBL_IPRO_CREDIT_TOKEN_DWLEN (1)
+union ipro_credit_token_u {
+	struct ipro_credit_token {
+		u32 up_token_num:8; /* [7:0] Default:0x80 RW */
+		u32 down_token_num:8; /* [15:8] Default:0x80 RW */
+		u32 up_init_vld:1; /* [16] Default:0x0 WO */
+		u32 down_init_vld:1; /* [17] Default:0x0 WO */
+		u32 rsv:14; /* [31:18] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_CREDIT_TOKEN_DWLEN];
+} __packed;
+
+#define NBL_IPRO_AM_SET_FLAG_ADDR (0xb041e0)
+#define NBL_IPRO_AM_SET_FLAG_DEPTH (1)
+#define NBL_IPRO_AM_SET_FLAG_WIDTH (32)
+#define NBL_IPRO_AM_SET_FLAG_DWLEN (1)
+union ipro_am_set_flag_u {
+	struct ipro_am_set_flag {
+		u32 set_flag:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_AM_SET_FLAG_DWLEN];
+} __packed;
+
+#define NBL_IPRO_AM_CLEAR_FLAG_ADDR (0xb041e4)
+#define NBL_IPRO_AM_CLEAR_FLAG_DEPTH (1)
+#define NBL_IPRO_AM_CLEAR_FLAG_WIDTH (32)
+#define NBL_IPRO_AM_CLEAR_FLAG_DWLEN (1)
+union ipro_am_clear_flag_u {
+	struct ipro_am_clear_flag {
+		u32 clear_flag:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_AM_CLEAR_FLAG_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FLAG_OFFSET_0_ADDR (0xb04200)
+#define NBL_IPRO_FLAG_OFFSET_0_DEPTH (1)
+#define NBL_IPRO_FLAG_OFFSET_0_WIDTH (32)
+#define NBL_IPRO_FLAG_OFFSET_0_DWLEN (1)
+union ipro_flag_offset_0_u {
+	struct ipro_flag_offset_0 {
+		u32 dir_offset_en:1; /* [0] Default:0x1 RW */
+		u32 dir_offset:5; /* [5:1] Default:0x00 RW */
+		u32 rsv1:2; /* [7:6] Default:0x0 RO */
+		u32 hw_flow_offset_en:1; /* [8] Default:0x1 RW */
+		u32 hw_flow_offset:5; /* [13:9] Default:0xb RW */
+		u32 rsv:18; /* [31:14] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_FLAG_OFFSET_0_DWLEN];
+} __packed;
+
+#define NBL_IPRO_DROP_NXT_STAGE_ADDR (0xb04210)
+#define NBL_IPRO_DROP_NXT_STAGE_DEPTH (1)
+#define NBL_IPRO_DROP_NXT_STAGE_WIDTH (32)
+#define NBL_IPRO_DROP_NXT_STAGE_DWLEN (1)
+union ipro_drop_nxt_stage_u {
+	struct ipro_drop_nxt_stage {
+		u32 stage:4; /* [3:0] Default:0xf RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_DROP_NXT_STAGE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FWD_ACTION_PRI_ADDR (0xb04220)
+#define NBL_IPRO_FWD_ACTION_PRI_DEPTH (1)
+#define NBL_IPRO_FWD_ACTION_PRI_WIDTH (32)
+#define NBL_IPRO_FWD_ACTION_PRI_DWLEN (1)
+union ipro_fwd_action_pri_u {
+	struct ipro_fwd_action_pri {
+		u32 dqueue:2; /* [1:0] Default:0x0 RW */
+		u32 set_dport:2; /* [3:2] Default:0x0 RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_FWD_ACTION_PRI_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MTU_CHECK_CTRL_ADDR (0xb0427c)
+#define NBL_IPRO_MTU_CHECK_CTRL_DEPTH (1)
+#define NBL_IPRO_MTU_CHECK_CTRL_WIDTH (32)
+#define NBL_IPRO_MTU_CHECK_CTRL_DWLEN (1)
+union ipro_mtu_check_ctrl_u {
+	struct ipro_mtu_check_ctrl {
+		u32 set_dport:16; /* [15:0] Default:0xFFFF RW */
+		u32 set_dport_pri:2; /* [17:16] Default:0x3 RW */
+		u32 proc_done:1; /* [18] Default:0x1 RW */
+		u32 rsv:13; /* [31:19] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MTU_CHECK_CTRL_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MTU_SEL_ADDR (0xb04280)
+#define NBL_IPRO_MTU_SEL_DEPTH (8)
+#define NBL_IPRO_MTU_SEL_WIDTH (32)
+#define NBL_IPRO_MTU_SEL_DWLEN (1)
+union ipro_mtu_sel_u {
+	struct ipro_mtu_sel {
+		u32 mtu_1:16; /* [15:0] Default:0x0 RW */
+		u32 mtu_0:16; /* [31:16] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MTU_SEL_DWLEN];
+} __packed;
+#define NBL_IPRO_MTU_SEL_REG(r) (NBL_IPRO_MTU_SEL_ADDR + \
+	(NBL_IPRO_MTU_SEL_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_ADDR (0xb04300)
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_DEPTH (16)
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_WIDTH (64)
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_DWLEN (2)
+union ipro_udl_pkt_flt_dmac_u {
+	struct ipro_udl_pkt_flt_dmac {
+		u32 dmac_l:32; /* [31:0] Default:0x0 RW */
+		u32 dmac_h:16; /* [47:32] Default:0x0 RW */
+		u32 rsv:16; /* [63:48] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_UDL_PKT_FLT_DMAC_DWLEN];
+} __packed;
+#define NBL_IPRO_UDL_PKT_FLT_DMAC_REG(r) (NBL_IPRO_UDL_PKT_FLT_DMAC_ADDR + \
+	(NBL_IPRO_UDL_PKT_FLT_DMAC_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_ADDR (0xb04380)
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_DEPTH (16)
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_WIDTH (32)
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_DWLEN (1)
+union ipro_udl_pkt_flt_vlan_u {
+	struct ipro_udl_pkt_flt_vlan {
+		u32 vlan_0:12; /* [11:0] Default:0x0 RW */
+		u32 vlan_1:12; /* [23:12] Default:0x0 RW */
+		u32 vlan_layer:2; /* [25:24] Default:0x0 RW */
+		u32 rsv:6; /* [31:26] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_UDL_PKT_FLT_VLAN_DWLEN];
+} __packed;
+#define NBL_IPRO_UDL_PKT_FLT_VLAN_REG(r) (NBL_IPRO_UDL_PKT_FLT_VLAN_ADDR + \
+	(NBL_IPRO_UDL_PKT_FLT_VLAN_DWLEN * 4) * (r))
+
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_ADDR (0xb043c0)
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_DEPTH (1)
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_WIDTH (32)
+#define NBL_IPRO_UDL_PKT_FLT_CTRL_DWLEN (1)
+union ipro_udl_pkt_flt_ctrl_u {
+	struct ipro_udl_pkt_flt_ctrl {
+		u32 vld:16; /* [15:0] Default:0x0 RW */
+		u32 rsv:16; /* [31:16] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_UDL_PKT_FLT_CTRL_DWLEN];
+} __packed;
+
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_ADDR (0xb043c4)
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_DEPTH (1)
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_WIDTH (32)
+#define NBL_IPRO_UDL_PKT_FLT_ACTION_DWLEN (1)
+union ipro_udl_pkt_flt_action_u {
+	struct ipro_udl_pkt_flt_action {
+		u32 dqueue:11; /* [10:0] Default:0x0 RW */
+		u32 dqueue_en:1; /* [11] Default:0x0 RW */
+		u32 rsv:2; /* [13:12] Default:0x0 RO */
+		u32 proc_done:1; /* [14] Default:0x0 RW */
+		u32 set_dport_en:1; /* [15] Default:0x0 RW */
+		u32 set_dport:16; /* [31:16] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_UDL_PKT_FLT_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_ADDR (0xb043e0)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_DEPTH (1)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_WIDTH (32)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_DWLEN (1)
+union ipro_anti_fake_addr_errcode_u {
+	struct ipro_anti_fake_addr_errcode {
+		u32 num:4; /* [3:0] Default:0xA RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_ANTI_FAKE_ADDR_ERRCODE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_ADDR (0xb043e4)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_DEPTH (1)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_WIDTH (32)
+#define NBL_IPRO_ANTI_FAKE_ADDR_ACTION_DWLEN (1)
+union ipro_anti_fake_addr_action_u {
+	struct ipro_anti_fake_addr_action {
+		u32 dqueue:11; /* [10:0] Default:0x0 RW */
+		u32 dqueue_en:1; /* [11] Default:0x0 RW */
+		u32 rsv:2; /* [13:12] Default:0x0 RO */
+		u32 proc_done:1; /* [14] Default:0x1 RW */
+		u32 set_dport_en:1; /* [15] Default:0x1 RW */
+		u32 set_dport:16; /* [31:16] Default:0xFFFF RW */
+	} __packed info;
+	u32 data[NBL_IPRO_ANTI_FAKE_ADDR_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_ADDR (0xb043f0)
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_DEPTH (1)
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_WIDTH (32)
+#define NBL_IPRO_VLAN_NUM_CHK_ERRCODE_DWLEN (1)
+union ipro_vlan_num_chk_errcode_u {
+	struct ipro_vlan_num_chk_errcode {
+		u32 num:4; /* [3:0] Default:0x1 RW */
+		u32 rsv:28; /* [31:4] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_VLAN_NUM_CHK_ERRCODE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_ADDR (0xb043f4)
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_DEPTH (1)
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_WIDTH (32)
+#define NBL_IPRO_VLAN_NUM_CHK_ACTION_DWLEN (1)
+union ipro_vlan_num_chk_action_u {
+	struct ipro_vlan_num_chk_action {
+		u32 dqueue:11; /* [10:0] Default:0x0 RW */
+		u32 dqueue_en:1; /* [11] Default:0x0 RW */
+		u32 rsv:2; /* [13:12] Default:0x0 RO */
+		u32 proc_done:1; /* [14] Default:0x1 RW */
+		u32 set_dport_en:1; /* [15] Default:0x1 RW */
+		u32 set_dport:16; /* [31:16] Default:0xFFFF RW */
+	} __packed info;
+	u32 data[NBL_IPRO_VLAN_NUM_CHK_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_PROBE_ADDR (0xb04400)
+#define NBL_IPRO_TCP_STATE_PROBE_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_PROBE_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_PROBE_DWLEN (1)
+union ipro_tcp_state_probe_u {
+	struct ipro_tcp_state_probe {
+		u32 up_chk_en:1; /* [0] Default:0x0 RW */
+		u32 dn_chk_en:1; /* [1] Default:0x0 RW */
+		u32 rsv:14; /* [15:2] Default:0x0 RO */
+		u32 up_bitmap:8; /* [23:16] Default:0x0 RW */
+		u32 dn_bitmap:8; /* [31:24] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_TCP_STATE_PROBE_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_UP_ACTION_ADDR (0xb04404)
+#define NBL_IPRO_TCP_STATE_UP_ACTION_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_UP_ACTION_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_UP_ACTION_DWLEN (1)
+union ipro_tcp_state_up_action_u {
+	struct ipro_tcp_state_up_action {
+		u32 dqueue:11; /* [10:0] Default:0x0 RW */
+		u32 dqueue_en:1; /* [11] Default:0x0 RW */
+		u32 rsv:2; /* [13:12] Default:0x0 RO */
+		u32 proc_done:1; /* [14] Default:0x0 RW */
+		u32 set_dport_en:1; /* [15] Default:0x0 RW */
+		u32 set_dport:16; /* [31:16] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_TCP_STATE_UP_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_TCP_STATE_DN_ACTION_ADDR (0xb04408)
+#define NBL_IPRO_TCP_STATE_DN_ACTION_DEPTH (1)
+#define NBL_IPRO_TCP_STATE_DN_ACTION_WIDTH (32)
+#define NBL_IPRO_TCP_STATE_DN_ACTION_DWLEN (1)
+union ipro_tcp_state_dn_action_u {
+	struct ipro_tcp_state_dn_action {
+		u32 dqueue:11; /* [10:0] Default:0x0 RW */
+		u32 dqueue_en:1; /* [11] Default:0x0 RW */
+		u32 rsv:2; /* [13:12] Default:0x0 RO */
+		u32 proc_done:1; /* [14] Default:0x0 RW */
+		u32 set_dport_en:1; /* [15] Default:0x0 RW */
+		u32 set_dport:16; /* [31:16] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_TCP_STATE_DN_ACTION_DWLEN];
+} __packed;
+
+#define NBL_IPRO_FWD_ACTION_ID_ADDR (0xb04440)
+#define NBL_IPRO_FWD_ACTION_ID_DEPTH (1)
+#define NBL_IPRO_FWD_ACTION_ID_WIDTH (32)
+#define NBL_IPRO_FWD_ACTION_ID_DWLEN (1)
+union ipro_fwd_action_id_u {
+	struct ipro_fwd_action_id {
+		u32 mirror_index:6; /* [5:0] Default:0x8 RW */
+		u32 dport:6; /* [11:6] Default:0x9 RW */
+		u32 dqueue:6; /* [17:12] Default:0xA RW */
+		u32 car:6; /* [23:18] Default:0x5 RW */
+		u32 rsv:8; /* [31:24] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_FWD_ACTION_ID_DWLEN];
+} __packed;
+
+#define NBL_IPRO_PED_ACTION_ID_ADDR (0xb04448)
+#define NBL_IPRO_PED_ACTION_ID_DEPTH (1)
+#define NBL_IPRO_PED_ACTION_ID_WIDTH (32)
+#define NBL_IPRO_PED_ACTION_ID_DWLEN (1)
+union ipro_ped_action_id_u {
+	struct ipro_ped_action_id {
+		u32 encap:6; /* [5:0] Default:0x2E RW */
+		u32 decap:6; /* [11:6] Default:0x2F RW */
+		u32 rsv:20; /* [31:12] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_PED_ACTION_ID_DWLEN];
+} __packed;
+
+#define NBL_IPRO_MNG_HIT_ACTION_ADDR (0xb04510)
+#define NBL_IPRO_MNG_HIT_ACTION_DEPTH (8)
+#define NBL_IPRO_MNG_HIT_ACTION_WIDTH (32)
+#define NBL_IPRO_MNG_HIT_ACTION_DWLEN (1)
+union ipro_mng_hit_action_u {
+	struct ipro_mng_hit_action {
+		u32 data:24; /* [23:0] Default:0x0 RW */
+		u32 rsv:8; /* [31:24] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_HIT_ACTION_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_HIT_ACTION_REG(r) (NBL_IPRO_MNG_HIT_ACTION_ADDR + \
+	(NBL_IPRO_MNG_HIT_ACTION_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DECISION_FLT_0_ADDR (0xb04530)
+#define NBL_IPRO_MNG_DECISION_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_DECISION_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_DECISION_FLT_0_DWLEN (1)
+union ipro_mng_decision_flt_0_u {
+	struct ipro_mng_decision_flt_0 {
+		u32 en:1; /* [0] Default:0x0 RW */
+		u32 pkt_len_and:1; /* [1] Default:0x0 RW */
+		u32 flow_ctrl_and:1; /* [2] Default:0x0 RW */
+		u32 ncsi_and:1; /* [3] Default:0x0 RW */
+		u32 eth_id:2; /* [5:4] Default:0x0 RW */
+		u32 rsv:26; /* [31:6] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_DECISION_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DECISION_FLT_0_REG(r) (NBL_IPRO_MNG_DECISION_FLT_0_ADDR + \
+	(NBL_IPRO_MNG_DECISION_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DECISION_FLT_1_ADDR (0xb04540)
+#define NBL_IPRO_MNG_DECISION_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_DECISION_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_DECISION_FLT_1_DWLEN (1)
+union ipro_mng_decision_flt_1_u {
+	struct ipro_mng_decision_flt_1 {
+		u32 dmac_and:4; /* [3:0] Default:0x0 RW */
+		u32 brcast_and:1; /* [4] Default:0x0 RW */
+		u32 mulcast_and:1; /* [5] Default:0x0 RW */
+		u32 vlan_and:8; /* [13:6] Default:0x0 RW */
+		u32 ipv4_dip_and:4; /* [17:14] Default:0x0 RW */
+		u32 ipv6_dip_and:4; /* [21:18] Default:0x0 RW */
+		u32 ethertype_and:4; /* [25:22] Default:0x0 RW */
+		u32 brcast_or:1; /* [26] Default:0x0 RW */
+		u32 icmpv4_or:1; /* [27] Default:0x0 RW */
+		u32 mld_or:4; /* [31:28] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_DECISION_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DECISION_FLT_1_REG(r) (NBL_IPRO_MNG_DECISION_FLT_1_ADDR + \
+	(NBL_IPRO_MNG_DECISION_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DECISION_FLT_2_ADDR (0xb04550)
+#define NBL_IPRO_MNG_DECISION_FLT_2_DEPTH (4)
+#define NBL_IPRO_MNG_DECISION_FLT_2_WIDTH (32)
+#define NBL_IPRO_MNG_DECISION_FLT_2_DWLEN (1)
+union ipro_mng_decision_flt_2_u {
+	struct ipro_mng_decision_flt_2 {
+		u32 neighbor_or:4; /* [3:0] Default:0x0 RW */
+		u32 port_or:16; /* [19:4] Default:0x0 RW */
+		u32 ethertype_or:4; /* [23:20] Default:0x0 RW */
+		u32 arp_rsp_or:2; /* [25:24] Default:0x0 RW */
+		u32 arp_req_or:2; /* [27:26] Default:0x0 RW */
+		u32 dmac_or:4; /* [31:28] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_DECISION_FLT_2_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DECISION_FLT_2_REG(r) (NBL_IPRO_MNG_DECISION_FLT_2_ADDR + \
+	(NBL_IPRO_MNG_DECISION_FLT_2_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DMAC_FLT_0_ADDR (0xb04560)
+#define NBL_IPRO_MNG_DMAC_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_DMAC_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_DMAC_FLT_0_DWLEN (1)
+union ipro_mng_dmac_flt_0_u {
+	struct ipro_mng_dmac_flt_0 {
+		u32 data:16; /* [15:0] Default:0x0 RW */
+		u32 en:1; /* [16] Default:0x0 RW */
+		u32 rsv:15; /* [31:17] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_DMAC_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DMAC_FLT_0_REG(r) (NBL_IPRO_MNG_DMAC_FLT_0_ADDR + \
+	(NBL_IPRO_MNG_DMAC_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_DMAC_FLT_1_ADDR (0xb04570)
+#define NBL_IPRO_MNG_DMAC_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_DMAC_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_DMAC_FLT_1_DWLEN (1)
+union ipro_mng_dmac_flt_1_u {
+	struct ipro_mng_dmac_flt_1 {
+		u32 data:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_DMAC_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_DMAC_FLT_1_REG(r) (NBL_IPRO_MNG_DMAC_FLT_1_ADDR + \
+	(NBL_IPRO_MNG_DMAC_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_VLAN_FLT_ADDR (0xb04580)
+#define NBL_IPRO_MNG_VLAN_FLT_DEPTH (8)
+#define NBL_IPRO_MNG_VLAN_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_VLAN_FLT_DWLEN (1)
+union ipro_mng_vlan_flt_u {
+	struct ipro_mng_vlan_flt {
+		u32 data:12; /* [11:0] Default:0x0 RW */
+		u32 sel:1; /* [12] Default:0x0 RW */
+		u32 nontag:1; /* [13] Default:0x0 RW */
+		u32 en:1; /* [14] Default:0x0 RW */
+		u32 rsv:17; /* [31:15] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_VLAN_FLT_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_VLAN_FLT_REG(r) (NBL_IPRO_MNG_VLAN_FLT_ADDR + \
+	(NBL_IPRO_MNG_VLAN_FLT_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_ADDR (0xb045a0)
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_DEPTH (4)
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_WIDTH (32)
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_DWLEN (1)
+union ipro_mng_ethertype_flt_u {
+	struct ipro_mng_ethertype_flt {
+		u32 data:16; /* [15:0] Default:0x0 RW */
+		u32 en:1; /* [16] Default:0x0 RW */
+		u32 rsv:15; /* [31:17] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_ETHERTYPE_FLT_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_ETHERTYPE_FLT_REG(r) (NBL_IPRO_MNG_ETHERTYPE_FLT_ADDR + \
+	(NBL_IPRO_MNG_ETHERTYPE_FLT_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV4_FLT_0_ADDR (0xb045b0)
+#define NBL_IPRO_MNG_IPV4_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_IPV4_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_IPV4_FLT_0_DWLEN (1)
+union ipro_mng_ipv4_flt_0_u {
+	struct ipro_mng_ipv4_flt_0 {
+		u32 en:1; /* [0] Default:0x0 RW */
+		u32 rsv:31; /* [31:1] Default:0x0 RO */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_IPV4_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV4_FLT_0_REG(r) (NBL_IPRO_MNG_IPV4_FLT_0_ADDR + \
+	(NBL_IPRO_MNG_IPV4_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV4_FLT_1_ADDR (0xb045c0)
+#define NBL_IPRO_MNG_IPV4_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_IPV4_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_IPV4_FLT_1_DWLEN (1)
+union ipro_mng_ipv4_flt_1_u {
+	struct ipro_mng_ipv4_flt_1 {
+		u32 data:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_IPV4_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV4_FLT_1_REG(r) (NBL_IPRO_MNG_IPV4_FLT_1_ADDR + \
+	(NBL_IPRO_MNG_IPV4_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_0_ADDR (0xb04600)
+#define NBL_IPRO_MNG_IPV6_FLT_0_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_0_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_0_DWLEN (1)
+union ipro_mng_ipv6_flt_0_u {
+	struct ipro_mng_ipv6_flt_0 {
+		u32 en:1; /* [0] Default:0x0 RW */
+		u32 rsv:15; /* [15:1] Default:0x0 RO */
+		u32 mask:16; /* [31:16] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_IPV6_FLT_0_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_0_REG(r) (NBL_IPRO_MNG_IPV6_FLT_0_ADDR + \
+	(NBL_IPRO_MNG_IPV6_FLT_0_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_1_ADDR (0xb04610)
+#define NBL_IPRO_MNG_IPV6_FLT_1_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_1_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_1_DWLEN (1)
+union ipro_mng_ipv6_flt_1_u {
+	struct ipro_mng_ipv6_flt_1 {
+		u32 data:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_IPV6_FLT_1_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_1_REG(r) (NBL_IPRO_MNG_IPV6_FLT_1_ADDR + \
+	(NBL_IPRO_MNG_IPV6_FLT_1_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_2_ADDR (0xb04620)
+#define NBL_IPRO_MNG_IPV6_FLT_2_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_2_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_2_DWLEN (1)
+union ipro_mng_ipv6_flt_2_u {
+	struct ipro_mng_ipv6_flt_2 {
+		u32 data:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_IPV6_FLT_2_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_2_REG(r) (NBL_IPRO_MNG_IPV6_FLT_2_ADDR + \
+	(NBL_IPRO_MNG_IPV6_FLT_2_DWLEN * 4) * (r))
+
+#define NBL_IPRO_MNG_IPV6_FLT_3_ADDR (0xb04630)
+#define NBL_IPRO_MNG_IPV6_FLT_3_DEPTH (4)
+#define NBL_IPRO_MNG_IPV6_FLT_3_WIDTH (32)
+#define NBL_IPRO_MNG_IPV6_FLT_3_DWLEN (1)
+union ipro_mng_ipv6_flt_3_u {
+	struct ipro_mng_ipv6_flt_3 {
+		u32 data:32; /* [31:0] Default:0x0 RW */
+	} __packed info;
+	u32 data[NBL_IPRO_MNG_IPV6_FLT_3_DWLEN];
+} __packed;
+#define NBL_IPRO_MNG_IPV6_FLT_3_REG(r) (NBL_IPRO_MNG_IPV6_FLT_3_ADDR + \
+	(NBL_IPRO_MNG_IPV6_FLT_3_DWLEN * 4) * (r))
+
+#define 
NBL_IPRO_MNG_IPV6_FLT_4_ADDR (0xb04640) +#define NBL_IPRO_MNG_IPV6_FLT_4_DEPTH (4) +#define NBL_IPRO_MNG_IPV6_FLT_4_WIDTH (32) +#define NBL_IPRO_MNG_IPV6_FLT_4_DWLEN (1) +union ipro_mng_ipv6_flt_4_u { + struct ipro_mng_ipv6_flt_4 { + u32 data:32; /* [31:0] Default:0x0 RW */ + } __packed info; + u32 data[NBL_IPRO_MNG_IPV6_FLT_4_DWLEN]; +} __packed; +#define NBL_IPRO_MNG_IPV6_FLT_4_REG(r) (NBL_IPRO_MNG_IPV6_FLT_4_ADDR + \ + (NBL_IPRO_MNG_IPV6_FLT_4_DWLEN * 4) * (r)) + +#define NBL_IPRO_MNG_PORT_FLT_ADDR (0xb04650) +#define NBL_IPRO_MNG_PORT_FLT_DEPTH (16) +#define NBL_IPRO_MNG_PORT_FLT_WIDTH (32) +#define NBL_IPRO_MNG_PORT_FLT_DWLEN (1) +union ipro_mng_port_flt_u { + struct ipro_mng_port_flt { + u32 data:16; /* [15:0] Default:0x0 RW */ + u32 en:1; /* [16] Default:0x0 RW */ + u32 mode:1; /* [17] Default:0x0 RW */ + u32 tcp:1; /* [18] Default:0x0 RW */ + u32 udp:1; /* [19] Default:0x0 RW */ + u32 rsv:12; /* [31:20] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_PORT_FLT_DWLEN]; +} __packed; +#define NBL_IPRO_MNG_PORT_FLT_REG(r) (NBL_IPRO_MNG_PORT_FLT_ADDR + \ + (NBL_IPRO_MNG_PORT_FLT_DWLEN * 4) * (r)) + +#define NBL_IPRO_MNG_ARP_REQ_FLT_0_ADDR (0xb04690) +#define NBL_IPRO_MNG_ARP_REQ_FLT_0_DEPTH (2) +#define NBL_IPRO_MNG_ARP_REQ_FLT_0_WIDTH (32) +#define NBL_IPRO_MNG_ARP_REQ_FLT_0_DWLEN (1) +union ipro_mng_arp_req_flt_0_u { + struct ipro_mng_arp_req_flt_0 { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:15; /* [15:1] Default:0x0 RO */ + u32 op:16; /* [31:16] Default:0x1 RW */ + } __packed info; + u32 data[NBL_IPRO_MNG_ARP_REQ_FLT_0_DWLEN]; +} __packed; +#define NBL_IPRO_MNG_ARP_REQ_FLT_0_REG(r) (NBL_IPRO_MNG_ARP_REQ_FLT_0_ADDR + \ + (NBL_IPRO_MNG_ARP_REQ_FLT_0_DWLEN * 4) * (r)) + +#define NBL_IPRO_MNG_ARP_REQ_FLT_1_ADDR (0xb046a0) +#define NBL_IPRO_MNG_ARP_REQ_FLT_1_DEPTH (2) +#define NBL_IPRO_MNG_ARP_REQ_FLT_1_WIDTH (32) +#define NBL_IPRO_MNG_ARP_REQ_FLT_1_DWLEN (1) +union ipro_mng_arp_req_flt_1_u { + struct ipro_mng_arp_req_flt_1 { + u32 data:32; /* 
[31:0] Default:0x0 RW */ + } __packed info; + u32 data[NBL_IPRO_MNG_ARP_REQ_FLT_1_DWLEN]; +} __packed; +#define NBL_IPRO_MNG_ARP_REQ_FLT_1_REG(r) (NBL_IPRO_MNG_ARP_REQ_FLT_1_ADDR + \ + (NBL_IPRO_MNG_ARP_REQ_FLT_1_DWLEN * 4) * (r)) + +#define NBL_IPRO_MNG_ARP_RSP_FLT_0_ADDR (0xb046b0) +#define NBL_IPRO_MNG_ARP_RSP_FLT_0_DEPTH (2) +#define NBL_IPRO_MNG_ARP_RSP_FLT_0_WIDTH (32) +#define NBL_IPRO_MNG_ARP_RSP_FLT_0_DWLEN (1) +union ipro_mng_arp_rsp_flt_0_u { + struct ipro_mng_arp_rsp_flt_0 { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:15; /* [15:1] Default:0x0 RO */ + u32 op:16; /* [31:16] Default:0x2 RW */ + } __packed info; + u32 data[NBL_IPRO_MNG_ARP_RSP_FLT_0_DWLEN]; +} __packed; +#define NBL_IPRO_MNG_ARP_RSP_FLT_0_REG(r) (NBL_IPRO_MNG_ARP_RSP_FLT_0_ADDR + \ + (NBL_IPRO_MNG_ARP_RSP_FLT_0_DWLEN * 4) * (r)) + +#define NBL_IPRO_MNG_ARP_RSP_FLT_1_ADDR (0xb046c0) +#define NBL_IPRO_MNG_ARP_RSP_FLT_1_DEPTH (2) +#define NBL_IPRO_MNG_ARP_RSP_FLT_1_WIDTH (32) +#define NBL_IPRO_MNG_ARP_RSP_FLT_1_DWLEN (1) +union ipro_mng_arp_rsp_flt_1_u { + struct ipro_mng_arp_rsp_flt_1 { + u32 data:32; /* [31:0] Default:0x0 RW */ + } __packed info; + u32 data[NBL_IPRO_MNG_ARP_RSP_FLT_1_DWLEN]; +} __packed; +#define NBL_IPRO_MNG_ARP_RSP_FLT_1_REG(r) (NBL_IPRO_MNG_ARP_RSP_FLT_1_ADDR + \ + (NBL_IPRO_MNG_ARP_RSP_FLT_1_DWLEN * 4) * (r)) + +#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_ADDR (0xb046d0) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_DEPTH (1) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_WIDTH (32) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_86_DWLEN (1) +union ipro_mng_neighbor_flt_86_u { + struct ipro_mng_neighbor_flt_86 { + u32 data:8; /* [7:0] Default:0x86 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_86_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_ADDR (0xb046d4) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_DEPTH (1) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_WIDTH (32) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_87_DWLEN 
(1) +union ipro_mng_neighbor_flt_87_u { + struct ipro_mng_neighbor_flt_87 { + u32 data:8; /* [7:0] Default:0x87 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_87_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_ADDR (0xb046d8) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_DEPTH (1) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_WIDTH (32) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_88_DWLEN (1) +union ipro_mng_neighbor_flt_88_u { + struct ipro_mng_neighbor_flt_88 { + u32 data:8; /* [7:0] Default:0x88 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_88_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_ADDR (0xb046dc) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_DEPTH (1) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_WIDTH (32) +#define NBL_IPRO_MNG_NEIGHBOR_FLT_89_DWLEN (1) +union ipro_mng_neighbor_flt_89_u { + struct ipro_mng_neighbor_flt_89 { + u32 data:8; /* [7:0] Default:0x89 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_NEIGHBOR_FLT_89_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_MLD_FLT_82_ADDR (0xb046e0) +#define NBL_IPRO_MNG_MLD_FLT_82_DEPTH (1) +#define NBL_IPRO_MNG_MLD_FLT_82_WIDTH (32) +#define NBL_IPRO_MNG_MLD_FLT_82_DWLEN (1) +union ipro_mng_mld_flt_82_u { + struct ipro_mng_mld_flt_82 { + u32 data:8; /* [7:0] Default:0x82 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_MLD_FLT_82_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_MLD_FLT_83_ADDR (0xb046e4) +#define NBL_IPRO_MNG_MLD_FLT_83_DEPTH (1) +#define NBL_IPRO_MNG_MLD_FLT_83_WIDTH (32) +#define NBL_IPRO_MNG_MLD_FLT_83_DWLEN (1) +union ipro_mng_mld_flt_83_u { + struct ipro_mng_mld_flt_83 { + u32 data:8; /* [7:0] Default:0x83 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; 
/* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_MLD_FLT_83_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_MLD_FLT_84_ADDR (0xb046e8) +#define NBL_IPRO_MNG_MLD_FLT_84_DEPTH (1) +#define NBL_IPRO_MNG_MLD_FLT_84_WIDTH (32) +#define NBL_IPRO_MNG_MLD_FLT_84_DWLEN (1) +union ipro_mng_mld_flt_84_u { + struct ipro_mng_mld_flt_84 { + u32 data:8; /* [7:0] Default:0x84 RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_MLD_FLT_84_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_MLD_FLT_8F_ADDR (0xb046ec) +#define NBL_IPRO_MNG_MLD_FLT_8F_DEPTH (1) +#define NBL_IPRO_MNG_MLD_FLT_8F_WIDTH (32) +#define NBL_IPRO_MNG_MLD_FLT_8F_DWLEN (1) +union ipro_mng_mld_flt_8f_u { + struct ipro_mng_mld_flt_8f { + u32 data:8; /* [7:0] Default:0x8f RW */ + u32 en:1; /* [8] Default:0x0 RW */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_MLD_FLT_8F_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_ICMPV4_FLT_ADDR (0xb046f0) +#define NBL_IPRO_MNG_ICMPV4_FLT_DEPTH (1) +#define NBL_IPRO_MNG_ICMPV4_FLT_WIDTH (32) +#define NBL_IPRO_MNG_ICMPV4_FLT_DWLEN (1) +union ipro_mng_icmpv4_flt_u { + struct ipro_mng_icmpv4_flt { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_ICMPV4_FLT_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_BRCAST_FLT_ADDR (0xb04700) +#define NBL_IPRO_MNG_BRCAST_FLT_DEPTH (1) +#define NBL_IPRO_MNG_BRCAST_FLT_WIDTH (32) +#define NBL_IPRO_MNG_BRCAST_FLT_DWLEN (1) +union ipro_mng_brcast_flt_u { + struct ipro_mng_brcast_flt { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_BRCAST_FLT_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_MULCAST_FLT_ADDR (0xb04704) +#define NBL_IPRO_MNG_MULCAST_FLT_DEPTH (1) +#define NBL_IPRO_MNG_MULCAST_FLT_WIDTH (32) +#define NBL_IPRO_MNG_MULCAST_FLT_DWLEN (1) +union ipro_mng_mulcast_flt_u { + 
struct ipro_mng_mulcast_flt { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_MULCAST_FLT_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_FLOW_CTRL_FLT_ADDR (0xb04710) +#define NBL_IPRO_MNG_FLOW_CTRL_FLT_DEPTH (1) +#define NBL_IPRO_MNG_FLOW_CTRL_FLT_WIDTH (32) +#define NBL_IPRO_MNG_FLOW_CTRL_FLT_DWLEN (1) +union ipro_mng_flow_ctrl_flt_u { + struct ipro_mng_flow_ctrl_flt { + u32 data:16; /* [15:0] Default:0x8808 RW */ + u32 en:1; /* [16] Default:0x0 RW */ + u32 bow:1; /* [17] Default:0x0 RW */ + u32 rsv:14; /* [31:18] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_FLOW_CTRL_FLT_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_NCSI_FLT_ADDR (0xb04714) +#define NBL_IPRO_MNG_NCSI_FLT_DEPTH (1) +#define NBL_IPRO_MNG_NCSI_FLT_WIDTH (32) +#define NBL_IPRO_MNG_NCSI_FLT_DWLEN (1) +union ipro_mng_ncsi_flt_u { + struct ipro_mng_ncsi_flt { + u32 data:16; /* [15:0] Default:0x88F8 RW */ + u32 en:1; /* [16] Default:0x0 RW */ + u32 bow:1; /* [17] Default:0x1 RW */ + u32 rsv:14; /* [31:18] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_NCSI_FLT_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_PKT_LEN_FLT_ADDR (0xb04720) +#define NBL_IPRO_MNG_PKT_LEN_FLT_DEPTH (1) +#define NBL_IPRO_MNG_PKT_LEN_FLT_WIDTH (32) +#define NBL_IPRO_MNG_PKT_LEN_FLT_DWLEN (1) +union ipro_mng_pkt_len_flt_u { + struct ipro_mng_pkt_len_flt { + u32 max:16; /* [15:0] Default:0x800 RW */ + u32 en:1; /* [16] Default:0x0 RW */ + u32 rsv:15; /* [31:17] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_PKT_LEN_FLT_DWLEN]; +} __packed; + +#define NBL_IPRO_FLOW_STOP_ADDR (0xb04810) +#define NBL_IPRO_FLOW_STOP_DEPTH (1) +#define NBL_IPRO_FLOW_STOP_WIDTH (32) +#define NBL_IPRO_FLOW_STOP_DWLEN (1) +union ipro_flow_stop_u { + struct ipro_flow_stop { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_FLOW_STOP_DWLEN]; +} __packed; + +#define 
NBL_IPRO_TOKEN_NUM_ADDR (0xb04814) +#define NBL_IPRO_TOKEN_NUM_DEPTH (1) +#define NBL_IPRO_TOKEN_NUM_WIDTH (32) +#define NBL_IPRO_TOKEN_NUM_DWLEN (1) +union ipro_token_num_u { + struct ipro_token_num { + u32 dn_cnt:8; /* [7:0] Default:0x80 RO */ + u32 up_cnt:8; /* [15:8] Default:0x80 RO */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_TOKEN_NUM_DWLEN]; +} __packed; + +#define NBL_IPRO_BYPASS_ADDR (0xb04818) +#define NBL_IPRO_BYPASS_DEPTH (1) +#define NBL_IPRO_BYPASS_WIDTH (32) +#define NBL_IPRO_BYPASS_DWLEN (1) +union ipro_bypass_u { + struct ipro_bypass { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_BYPASS_DWLEN]; +} __packed; + +#define NBL_IPRO_RR_REQ_MASK_ADDR (0xb0481c) +#define NBL_IPRO_RR_REQ_MASK_DEPTH (1) +#define NBL_IPRO_RR_REQ_MASK_WIDTH (32) +#define NBL_IPRO_RR_REQ_MASK_DWLEN (1) +union ipro_rr_req_mask_u { + struct ipro_rr_req_mask { + u32 dn:1; /* [0] Default:0x0 RW */ + u32 up:1; /* [1] Default:0x0 RW */ + u32 rsv:30; /* [31:2] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_RR_REQ_MASK_DWLEN]; +} __packed; + +#define NBL_IPRO_BP_STATE_ADDR (0xb04828) +#define NBL_IPRO_BP_STATE_DEPTH (1) +#define NBL_IPRO_BP_STATE_WIDTH (32) +#define NBL_IPRO_BP_STATE_DWLEN (1) +union ipro_bp_state_u { + struct ipro_bp_state { + u32 pp_up_link_fc:1; /* [0] Default:0x0 RO */ + u32 pp_dn_link_fc:1; /* [1] Default:0x0 RO */ + u32 pp_up_creadit:1; /* [2] Default:0x0 RO */ + u32 pp_dn_creadit:1; /* [3] Default:0x0 RO */ + u32 mcc_up_creadit:1; /* [4] Default:0x0 RO */ + u32 mcc_dn_creadit:1; /* [5] Default:0x0 RO */ + u32 pp_rdy:1; /* [6] Default:0x1 RO */ + u32 dn_rdy:1; /* [7] Default:0x1 RO */ + u32 up_rdy:1; /* [8] Default:0x1 RO */ + u32 rsv:23; /* [31:9] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_BP_STATE_DWLEN]; +} __packed; + +#define NBL_IPRO_BP_HISTORY_ADDR (0xb0482c) +#define NBL_IPRO_BP_HISTORY_DEPTH (1) +#define 
NBL_IPRO_BP_HISTORY_WIDTH (32) +#define NBL_IPRO_BP_HISTORY_DWLEN (1) +union ipro_bp_history_u { + struct ipro_bp_history { + u32 pp_rdy:1; /* [0] Default:0x0 RC */ + u32 dn_rdy:1; /* [1] Default:0x0 RC */ + u32 up_rdy:1; /* [2] Default:0x0 RC */ + u32 rsv:29; /* [31:3] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_BP_HISTORY_DWLEN]; +} __packed; + +#define NBL_IPRO_ERRCODE_TBL_DROP_ADDR (0xb0486c) +#define NBL_IPRO_ERRCODE_TBL_DROP_DEPTH (1) +#define NBL_IPRO_ERRCODE_TBL_DROP_WIDTH (32) +#define NBL_IPRO_ERRCODE_TBL_DROP_DWLEN (1) +union ipro_errcode_tbl_drop_u { + struct ipro_errcode_tbl_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_ERRCODE_TBL_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_SPORT_TBL_DROP_ADDR (0xb04870) +#define NBL_IPRO_SPORT_TBL_DROP_DEPTH (1) +#define NBL_IPRO_SPORT_TBL_DROP_WIDTH (32) +#define NBL_IPRO_SPORT_TBL_DROP_DWLEN (1) +union ipro_sport_tbl_drop_u { + struct ipro_sport_tbl_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_SPORT_TBL_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_PTYPE_TBL_DROP_ADDR (0xb04874) +#define NBL_IPRO_PTYPE_TBL_DROP_DEPTH (1) +#define NBL_IPRO_PTYPE_TBL_DROP_WIDTH (32) +#define NBL_IPRO_PTYPE_TBL_DROP_DWLEN (1) +union ipro_ptype_tbl_drop_u { + struct ipro_ptype_tbl_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_PTYPE_TBL_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_UDL_DROP_ADDR (0xb04878) +#define NBL_IPRO_UDL_DROP_DEPTH (1) +#define NBL_IPRO_UDL_DROP_WIDTH (32) +#define NBL_IPRO_UDL_DROP_DWLEN (1) +union ipro_udl_drop_u { + struct ipro_udl_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_UDL_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_ANTIFAKE_DROP_ADDR 
(0xb0487c) +#define NBL_IPRO_ANTIFAKE_DROP_DEPTH (1) +#define NBL_IPRO_ANTIFAKE_DROP_WIDTH (32) +#define NBL_IPRO_ANTIFAKE_DROP_DWLEN (1) +union ipro_antifake_drop_u { + struct ipro_antifake_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_ANTIFAKE_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_VLAN_NUM_DROP_ADDR (0xb04880) +#define NBL_IPRO_VLAN_NUM_DROP_DEPTH (1) +#define NBL_IPRO_VLAN_NUM_DROP_WIDTH (32) +#define NBL_IPRO_VLAN_NUM_DROP_DWLEN (1) +union ipro_vlan_num_drop_u { + struct ipro_vlan_num_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_VLAN_NUM_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_TCP_STATE_DROP_ADDR (0xb04884) +#define NBL_IPRO_TCP_STATE_DROP_DEPTH (1) +#define NBL_IPRO_TCP_STATE_DROP_WIDTH (32) +#define NBL_IPRO_TCP_STATE_DROP_DWLEN (1) +union ipro_tcp_state_drop_u { + struct ipro_tcp_state_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_TCP_STATE_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_RAM_ERR_DROP_ADDR (0xb04888) +#define NBL_IPRO_RAM_ERR_DROP_DEPTH (1) +#define NBL_IPRO_RAM_ERR_DROP_WIDTH (32) +#define NBL_IPRO_RAM_ERR_DROP_DWLEN (1) +union ipro_ram_err_drop_u { + struct ipro_ram_err_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_RAM_ERR_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_KG_MISS_ADDR (0xb0488c) +#define NBL_IPRO_KG_MISS_DEPTH (1) +#define NBL_IPRO_KG_MISS_WIDTH (32) +#define NBL_IPRO_KG_MISS_DWLEN (1) +union ipro_kg_miss_u { + struct ipro_kg_miss { + u32 drop_cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 cnt:16; /* [31:16] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_IPRO_KG_MISS_DWLEN]; +} __packed; + +#define NBL_IPRO_MNG_DROP_ADDR (0xb04890) +#define NBL_IPRO_MNG_DROP_DEPTH (1) 
+#define NBL_IPRO_MNG_DROP_WIDTH (32) +#define NBL_IPRO_MNG_DROP_DWLEN (1) +union ipro_mng_drop_u { + struct ipro_mng_drop { + u32 cnt:16; /* [15:0] Default:0x0 SCTR */ + u32 rsv:16; /* [31:16] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_MNG_DROP_DWLEN]; +} __packed; + +#define NBL_IPRO_MTU_CHECK_DROP_ADDR (0xb04900) +#define NBL_IPRO_MTU_CHECK_DROP_DEPTH (256) +#define NBL_IPRO_MTU_CHECK_DROP_WIDTH (32) +#define NBL_IPRO_MTU_CHECK_DROP_DWLEN (1) +union ipro_mtu_check_drop_u { + struct ipro_mtu_check_drop { + u32 vsi_3:8; /* [7:0] Default:0x0 SCTR */ + u32 vsi_2:8; /* [15:8] Default:0x0 SCTR */ + u32 vsi_1:8; /* [23:16] Default:0x0 SCTR */ + u32 vsi_0:8; /* [31:24] Default:0x0 SCTR */ + } __packed info; + u32 data[NBL_IPRO_MTU_CHECK_DROP_DWLEN]; +} __packed; +#define NBL_IPRO_MTU_CHECK_DROP_REG(r) (NBL_IPRO_MTU_CHECK_DROP_ADDR + \ + (NBL_IPRO_MTU_CHECK_DROP_DWLEN * 4) * (r)) + +#define NBL_IPRO_LAST_QUEUE_RAM_ERR_ADDR (0xb04d08) +#define NBL_IPRO_LAST_QUEUE_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_QUEUE_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_QUEUE_RAM_ERR_DWLEN (1) +union ipro_last_queue_ram_err_u { + struct ipro_last_queue_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_QUEUE_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_ADDR (0xb04d0c) +#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_DWLEN (1) +union ipro_last_dn_src_port_ram_err_u { + struct ipro_last_dn_src_port_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_DN_SRC_PORT_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_ADDR (0xb04d10) +#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_DWLEN (1) +union ipro_last_up_src_port_ram_err_u { + struct 
ipro_last_up_src_port_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_UP_SRC_PORT_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_ADDR (0xb04d14) +#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_DWLEN (1) +union ipro_last_dn_ptype_ram_err_u { + struct ipro_last_dn_ptype_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_DN_PTYPE_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_ADDR (0xb04d18) +#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_DWLEN (1) +union ipro_last_up_ptype_ram_err_u { + struct ipro_last_up_ptype_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_UP_PTYPE_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_ADDR (0xb04d20) +#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_KG_PROF_RAM_ERR_DWLEN (1) +union ipro_last_kg_prof_ram_err_u { + struct ipro_last_kg_prof_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_KG_PROF_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_ADDR (0xb04d28) +#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_DEPTH (1) +#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_WIDTH (32) +#define NBL_IPRO_LAST_ERRCODE_RAM_ERR_DWLEN (1) +union ipro_last_errcode_ram_err_u { + struct ipro_last_errcode_ram_err { + u32 info:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_LAST_ERRCODE_RAM_ERR_DWLEN]; +} __packed; + +#define NBL_IPRO_IN_PKT_CAP_EN_ADDR (0xb04dfc) +#define NBL_IPRO_IN_PKT_CAP_EN_DEPTH (1) +#define NBL_IPRO_IN_PKT_CAP_EN_WIDTH (32) +#define NBL_IPRO_IN_PKT_CAP_EN_DWLEN (1) +union ipro_in_pkt_cap_en_u { + 
struct ipro_in_pkt_cap_en { + u32 en:1; /* [0] Default:0x0 RW */ + u32 rsv:31; /* [31:1] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_IN_PKT_CAP_EN_DWLEN]; +} __packed; + +#define NBL_IPRO_IN_PKT_CAP_ADDR (0xb04e00) +#define NBL_IPRO_IN_PKT_CAP_DEPTH (64) +#define NBL_IPRO_IN_PKT_CAP_WIDTH (32) +#define NBL_IPRO_IN_PKT_CAP_DWLEN (1) +union ipro_in_pkt_cap_u { + struct ipro_in_pkt_cap { + u32 data:32; /* [31:0] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_IN_PKT_CAP_DWLEN]; +} __packed; +#define NBL_IPRO_IN_PKT_CAP_REG(r) (NBL_IPRO_IN_PKT_CAP_ADDR + \ + (NBL_IPRO_IN_PKT_CAP_DWLEN * 4) * (r)) + +#define NBL_IPRO_ERRCODE_TBL_ADDR (0xb05000) +#define NBL_IPRO_ERRCODE_TBL_DEPTH (16) +#define NBL_IPRO_ERRCODE_TBL_WIDTH (64) +#define NBL_IPRO_ERRCODE_TBL_DWLEN (2) +union ipro_errcode_tbl_u { + struct ipro_errcode_tbl { + u32 dqueue:11; /* [10:0] Default:0x0 RW */ + u32 dqueue_en:1; /* [11] Default:0x0 RW */ + u32 dqueue_pri:2; /* [13:12] Default:0x0 RW */ + u32 set_dport_pri:2; /* [15:14] Default:0x0 RW */ + u32 set_dport:16; /* [31:16] Default:0x0 RW */ + u32 set_dport_en:1; /* [32] Default:0x0 RW */ + u32 proc_done:1; /* [33] Default:0x0 RW */ + u32 vld:1; /* [34] Default:0x0 RW */ + u32 rsv:29; /* [63:35] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_ERRCODE_TBL_DWLEN]; +} __packed; +#define NBL_IPRO_ERRCODE_TBL_REG(r) (NBL_IPRO_ERRCODE_TBL_ADDR + \ + (NBL_IPRO_ERRCODE_TBL_DWLEN * 4) * (r)) + +#define NBL_IPRO_DN_PTYPE_TBL_ADDR (0xb06000) +#define NBL_IPRO_DN_PTYPE_TBL_DEPTH (256) +#define NBL_IPRO_DN_PTYPE_TBL_WIDTH (64) +#define NBL_IPRO_DN_PTYPE_TBL_DWLEN (2) +union ipro_dn_ptype_tbl_u { + struct ipro_dn_ptype_tbl { + u32 dn_entry_vld:1; /* [0] Default:0x0 RW */ + u32 dn_mirror_en:1; /* [1] Default:0x0 RW */ + u32 dn_mirror_pri:2; /* [3:2] Default:0x0 RW */ + u32 dn_mirror_id:4; /* [7:4] Default:0x0 RW */ + u32 dn_encap_en:1; /* [8] Default:0x0 RW */ + u32 dn_encap_pri:2; /* [10:9] Default:0x0 RW */ + u32 dn_encap_index:13; /* 
[23:11] Default:0x0 RW */ + u32 not_used_0:6; /* [29:24] Default:0x0 RW */ + u32 proc_done:1; /* [30] Default:0x0 RW */ + u32 set_dport_en:1; /* [31] Default:0x0 RW */ + u32 set_dport:16; /* [47:32] Default:0x0 RW */ + u32 set_dport_pri:2; /* [49:48] Default:0x0 RW */ + u32 dqueue_pri:2; /* [51:50] Default:0x0 RW */ + u32 dqueue:11; /* [62:52] Default:0x0 RW */ + u32 dqueue_en:1; /* [63] Default:0x0 RW */ + } __packed info; + u32 data[NBL_IPRO_DN_PTYPE_TBL_DWLEN]; +} __packed; +#define NBL_IPRO_DN_PTYPE_TBL_REG(r) (NBL_IPRO_DN_PTYPE_TBL_ADDR + \ + (NBL_IPRO_DN_PTYPE_TBL_DWLEN * 4) * (r)) + +#define NBL_IPRO_UP_PTYPE_TBL_ADDR (0xb06800) +#define NBL_IPRO_UP_PTYPE_TBL_DEPTH (256) +#define NBL_IPRO_UP_PTYPE_TBL_WIDTH (64) +#define NBL_IPRO_UP_PTYPE_TBL_DWLEN (2) +union ipro_up_ptype_tbl_u { + struct ipro_up_ptype_tbl { + u32 up_entry_vld:1; /* [0] Default:0x0 RW */ + u32 up_mirror_en:1; /* [1] Default:0x0 RW */ + u32 up_mirror_pri:2; /* [3:2] Default:0x0 RW */ + u32 up_mirror_id:4; /* [7:4] Default:0x0 RW */ + u32 up_decap_en:1; /* [8] Default:0x0 RW */ + u32 up_decap_pri:2; /* [10:9] Default:0x0 RW */ + u32 not_used_1:19; /* [29:11] Default:0x0 RW */ + u32 proc_done:1; /* [30] Default:0x0 RW */ + u32 set_dport_en:1; /* [31] Default:0x0 RW */ + u32 set_dport:16; /* [47:32] Default:0x0 RW */ + u32 set_dport_pri:2; /* [49:48] Default:0x0 RW */ + u32 dqueue_pri:2; /* [51:50] Default:0x0 RW */ + u32 dqueue:11; /* [62:52] Default:0x0 RW */ + u32 dqueue_en:1; /* [63] Default:0x0 RW */ + } __packed info; + u32 data[NBL_IPRO_UP_PTYPE_TBL_DWLEN]; +} __packed; +#define NBL_IPRO_UP_PTYPE_TBL_REG(r) (NBL_IPRO_UP_PTYPE_TBL_ADDR + \ + (NBL_IPRO_UP_PTYPE_TBL_DWLEN * 4) * (r)) + +#define NBL_IPRO_QUEUE_TBL_ADDR (0xb08000) +#define NBL_IPRO_QUEUE_TBL_DEPTH (2048) +#define NBL_IPRO_QUEUE_TBL_WIDTH (32) +#define NBL_IPRO_QUEUE_TBL_DWLEN (1) +union ipro_queue_tbl_u { + struct ipro_queue_tbl { + u32 vsi:10; /* [9:0] Default:0x0 RW */ + u32 vsi_en:1; /* [10] Default:0x0 RW */ + u32 rsv:21; 
/* [31:11] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_QUEUE_TBL_DWLEN]; +} __packed; +#define NBL_IPRO_QUEUE_TBL_REG(r) (NBL_IPRO_QUEUE_TBL_ADDR + \ + (NBL_IPRO_QUEUE_TBL_DWLEN * 4) * (r)) + +#define NBL_IPRO_UP_SRC_PORT_TBL_ADDR (0xb0b000) +#define NBL_IPRO_UP_SRC_PORT_TBL_DEPTH (4) +#define NBL_IPRO_UP_SRC_PORT_TBL_WIDTH (64) +#define NBL_IPRO_UP_SRC_PORT_TBL_DWLEN (2) +union ipro_up_src_port_tbl_u { + struct ipro_up_src_port_tbl { + u32 entry_vld:1; /* [0] Default:0x0 RW */ + u32 vlan_layer_num_0:2; /* [2:1] Default:0x0 RW */ + u32 vlan_layer_num_1:2; /* [4:3] Default:0x0 RW */ + u32 lag_vld:1; /* [5] Default:0x0 RW */ + u32 lag_id:2; /* [7:6] Default:0x0 RW */ + u32 hw_flow:1; /* [8] Default:0x0 RW */ + u32 mirror_en:1; /* [9] Default:0x0 RW */ + u32 mirror_pr:2; /* [11:10] Default:0x0 RW */ + u32 mirror_id:4; /* [15:12] Default:0x0 RW */ + u32 dqueue_pri:2; /* [17:16] Default:0x0 RW */ + u32 set_dport_pri:2; /* [19:18] Default:0x0 RW */ + u32 dqueue:11; /* [30:20] Default:0x0 RW */ + u32 dqueue_en:1; /* [31] Default:0x0 RW */ + u32 set_dport:16; /* [47:32] Default:0x0 RW */ + u32 set_dport_en:1; /* [48] Default:0x0 RW */ + u32 proc_done:1; /* [49] Default:0x0 RW */ + u32 car_en:1; /* [50] Default:0x0 RW */ + u32 car_pr:2; /* [52:51] Default:0x0 RW */ + u32 car_id:10; /* [62:53] Default:0x0 RW */ + u32 rsv:1; /* [63] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_UP_SRC_PORT_TBL_DWLEN]; +} __packed; +#define NBL_IPRO_UP_SRC_PORT_TBL_REG(r) (NBL_IPRO_UP_SRC_PORT_TBL_ADDR + \ + (NBL_IPRO_UP_SRC_PORT_TBL_DWLEN * 4) * (r)) + +#define NBL_IPRO_DN_SRC_PORT_TBL_ADDR (0xb0c000) +#define NBL_IPRO_DN_SRC_PORT_TBL_DEPTH (1024) +#define NBL_IPRO_DN_SRC_PORT_TBL_WIDTH (128) +#define NBL_IPRO_DN_SRC_PORT_TBL_DWLEN (4) +union ipro_dn_src_port_tbl_u { + struct ipro_dn_src_port_tbl { + u32 entry_vld:1; /* [0] Default:0x0 RW */ + u32 mirror_en:1; /* [1] Default:0x0 RW */ + u32 mirror_pr:2; /* [3:2] Default:0x0 RW */ + u32 mirror_id:4; /* [7:4] Default:0x0 
RW */ + u32 vlan_layer_num_1:2; /* [9:8] Default:0x0 RW */ + u32 hw_flow:1; /* [10] Default:0x0 RW */ + u32 mtu_sel:4; /* [14:11] Default:0x0 RW */ + u32 addr_check_en:1; /* [15] Default:0x0 RW */ + u32 smac_l:32; /* [63:16] Default:0x0 RW */ + u32 smac_h:16; /* [63:16] Default:0x0 RW */ + u32 dqueue:11; /* [74:64] Default:0x0 RW */ + u32 dqueue_en:1; /* [75] Default:0x0 RW */ + u32 dqueue_pri:2; /* [77:76] Default:0x0 RW */ + u32 set_dport_pri:2; /* [79:78] Default:0x0 RW */ + u32 set_dport:16; /* [95:80] Default:0x0 RW */ + u32 set_dport_en:1; /* [96] Default:0x0 RW */ + u32 proc_done:1; /* [97] Default:0x0 RW */ + u32 not_used_1:2; /* [99:98] Default:0x0 RW */ + u32 rsv:28; /* [127:100] Default:0x0 RO */ + } __packed info; + u32 data[NBL_IPRO_DN_SRC_PORT_TBL_DWLEN]; +} __packed; +#define NBL_IPRO_DN_SRC_PORT_TBL_REG(r) (NBL_IPRO_DN_SRC_PORT_TBL_ADDR + \ + (NBL_IPRO_DN_SRC_PORT_TBL_DWLEN * 4) * (r)) + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h index b078b765f772..b562b2426a5a 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.h @@ -8,6 +8,1707 @@ #define _NBL_HW_LEONIS_H_ #include "nbl_core.h" +#include "nbl_hw.h" #include "nbl_hw_reg.h" +#define NBL_NOTIFY_DELAY_MAX_TIME_FOR_REGS \ + 300 /* 300us for palladium,5us for s2c */ + +#define NBL_DRAIN_WAIT_TIMES (30000) + +/* ---------- FEM ---------- */ +#define NBL_FEM_INT_STATUS (NBL_PPE_FEM_BASE + 0x00000000) +#define NBL_FEM_INT_MASK (NBL_PPE_FEM_BASE + 0x00000004) +#define NBL_FEM_INIT_START (NBL_PPE_FEM_BASE + 0x00000180) +#define NBL_FEM_KT_ACC_DATA (NBL_PPE_FEM_BASE + 0x00000348) +#define NBL_FEM_INSERT_SEARCH0_CTRL (NBL_PPE_FEM_BASE + 0x00000500) +#define NBL_FEM_INSERT_SEARCH0_ACK (NBL_PPE_FEM_BASE + 0x00000504) +#define NBL_FEM_INSERT_SEARCH0_DATA 
(NBL_PPE_FEM_BASE + 0x00000508) +#define KT_MASK_LEN32_ACTION_INFO (0x0) +#define KT_MASK_LEN12_ACTION_INFO (0xFFFFF000) +#define NBL_FEM_SEARCH_KEY_LEN 44 +#define NBL_DRIVER_STATUS_REG (0x1300444) +#define NBL_DRIVER_STATUS_BIT (16) +#define NBL_HW_DUMMY_REG (0x1300904) + +#define HT_PORT0_BANK_SEL (0b01100000) +#define HT_PORT1_BANK_SEL (0b00011000) +#define HT_PORT2_BANK_SEL (0b00000111) +#define KT_PORT0_BANK_SEL (0b11100000) +#define KT_PORT1_BANK_SEL (0b00011000) +#define KT_PORT2_BANK_SEL (0b00000111) +#define AT_PORT0_BANK_SEL (0b000000000000) +#define AT_PORT1_BANK_SEL (0b111110000000) +#define AT_PORT2_BANK_SEL (0b000001111111) +#define HT_PORT0_BTM 2 +#define HT_PORT1_BTM 6 +#define HT_PORT2_BTM 16 +#define NBL_1BIT 1 +#define NBL_8BIT 8 +#define NBL_16BIT 16 + +#define NBL_FEM_HT_BANK_SEL_BITMAP (NBL_PPE_FEM_BASE + 0x00000200) +#define NBL_FEM_KT_BANK_SEL_BITMAP (NBL_PPE_FEM_BASE + 0x00000204) +#define NBL_FEM_AT_BANK_SEL_BITMAP (NBL_PPE_FEM_BASE + 0x00000208) +#define NBL_FEM_AT_BANK_SEL_BITMAP2 (NBL_PPE_FEM_BASE + 0x0000020C) + +#define NBL_EM_PT_MASK_LEN_0 (0xFFFFFFFF) +#define NBL_EM_PT_MASK_LEN_64 (0x0000FFFF) +#define NBL_EM_PT_MASK_LEN_96 (0x000000FF) +#define NBL_EM_PT_MASK1_LEN_0 (0xFFFFFFFF) +#define NBL_EM_PT_MASK1_LEN_4 (0x7FFFFFFF) +#define NBL_EM_PT_MASK1_LEN_12 (0x1FFFFFFF) +#define NBL_EM_PT_MASK1_LEN_20 (0x07FFFFFF) +#define NBL_EM_PT_MASK1_LEN_28 (0x01FFFFFF) +#define NBL_EM_PT_MASK1_LEN_32 (0x00FFFFFF) +#define NBL_EM_PT_MASK1_LEN_76 (0x00001FFF) +#define NBL_EM_PT_MASK1_LEN_112 (0x0000000F) +#define NBL_EM_PT_MASK1_LEN_116 (0x00000007) +#define NBL_EM_PT_MASK1_LEN_124 (0x00000001) +#define NBL_EM_PT_MASK1_LEN_128 (0x0) +#define NBL_EM_PT_MASK2_LEN_28 (0x000007FF) +#define NBL_EM_PT_MASK2_LEN_36 (0x000001FF) +#define NBL_EM_PT_MASK2_LEN_44 (0x0000007F) +#define NBL_EM_PT_MASK2_LEN_52 (0x0000001F) +#define NBL_EM_PT_MASK2_LEN_60 (0x00000007) +#define NBL_EM_PT_MASK2_LEN_68 (0x00000001) +#define NBL_EM_PT_MASK2_LEN_72 (0x00000010) 
+#define NBL_EM_PT_MASK2_SEC_72 (0x00000000)
+
+#define NBL_KT_HW_L2_DW_LEN 40
+
+#define NBL_ACL_VSI_PF_UPCALL 9
+#define NBL_ACL_ETH_PF_UPCALL 8
+#define NBL_ACL_INDIRECT_ACCESS_WRITE (0)
+#define NBL_ACL_INDIRECT_ACCESS_READ (1)
+#define NBL_ETH_BASE_IDX 8
+#define NBL_VSI_BASE_IDX 0
+#define NBL_PF_MAX_NUM 4
+#define NBL_ACL_TCAM_UPCALL_IDX 15
+
+#define NBL_GET_PF_ETH_ID(idx) ((idx) + NBL_ETH_BASE_IDX)
+#define NBL_GET_PF_VSI_ID(idx) ((idx) * 256)
+#define NBL_ACL_GET_ACTION_DATA(act_buf, act_data) \
+	(act_data = (act_buf) & 0x3fffff)
+#define NBL_ACL_FLUSH_FLOW_BTM 0x7fff
+#define NBL_ACL_FLUSH_UPCALL_BTM 0x8000
+
+#define NBL_ACL_TCAM_DATA_X(t) (NBL_PPE_ACL_BASE + 0x00000904 + ((t) * 8))
+#define NBL_ACL_TCAM_DATA_Y(t) (NBL_PPE_ACL_BASE + 0x00000990 + ((t) * 8))
+
+/* ---------- MCC ---------- */
+#define NBL_MCC_MODULE (0x00B44000)
+#define NBL_MCC_LEAF_NODE_TABLE(i) \
+	(NBL_MCC_MODULE + 0x00010000 + (i) * sizeof(struct nbl_mcc_tbl))
+#pragma pack(1)
+
+struct nbl_fem_int_mask {
+	u32 rsv0:2;
+	u32 fifo_ovf_err:1;
+	u32 fifo_udf_err:1;
+	u32 cif_err:1;
+	u32 rsv1:1;
+	u32 cfg_err:1;
+	u32 data_ucor_err:1;
+	u32 bank_cflt_err:1;
+	u32 rsv2:23;
+};
+
+union nbl_fem_ht_acc_ctrl_u {
+	struct nbl_fem_ht_acc_ctrl {
+		u32 bucket_id:2;	/* used to choose the entry's hash bucket */
+		u32 entry_id:14;	/* used to choose the hash bucket's entry */
+		u32 ht_id:1;		/* 0:HT0, 1:HT1 */
+#define NBL_ACC_HT0 (0)
+#define NBL_ACC_HT1 (1)
+		u32 port:2;		/* 0:pp0 1:pp1 2:pp2 */
+		u32 rsv:10;
+		u32 access_size:1;	/* 0:32bit 1:128bit, reads support 128 */
+#define NBL_ACC_SIZE_32B (0)
+#define NBL_ACC_SIZE_128B (1)
+		u32 rw:1;		/* 1:read 0:write */
+#define NBL_ACC_MODE_READ (1)
+#define NBL_ACC_MODE_WRITE (0)
+		u32 start:1;		/* enable indirect access */
+	} info;
+#define NBL_FEM_HT_ACC_CTRL_TBL_WIDTH (sizeof(struct nbl_fem_ht_acc_ctrl))
+	u8 data[NBL_FEM_HT_ACC_CTRL_TBL_WIDTH];
+};
+
+#define NBL_FEM_HT_ACC_CTRL (NBL_PPE_FEM_BASE + 0x00000300)
+
+union nbl_fem_ht_acc_data_u {
+	struct nbl_fem_ht_acc_data {
+		u32 kt_index:17;
+		u32 hash:14;
+		u32 vld:1;
+	} info;
+#define NBL_FEM_HT_ACC_DATA_TBL_WIDTH (sizeof(struct nbl_fem_ht_acc_data))
+	u8 data[NBL_FEM_HT_ACC_DATA_TBL_WIDTH];
+};
+
+#define NBL_FEM_HT_ACC_DATA (NBL_PPE_FEM_BASE + 0x00000308)
+
+union nbl_fem_ht_acc_ack_u {
+	struct nbl_fem_ht_acc_ack {
+		u32 done:1;	/* indirect access has finished */
+		u32 status:1;	/* indirect access error */
+		u32 rsv:30;
+	} info;
+#define NBL_FEM_HT_ACC_ACK_TBL_WIDTH (sizeof(struct nbl_fem_ht_acc_ack))
+	u8 data[NBL_FEM_HT_ACC_ACK_TBL_WIDTH];
+};
+
+#define NBL_FEM_HT_ACC_ACK (NBL_PPE_FEM_BASE + 0x00000304)
+
+union nbl_fem_kt_acc_ctrl_u {
+	struct nbl_fem_kt_acc_ctrl {
+		u32 addr:17;	/* kt-index */
+		u32 rsv:12;
+		u32 access_size:1;
+#define NBL_ACC_SIZE_160B (0)
+#define NBL_ACC_SIZE_320B (1)
+		u32 rw:1;	/* 1:read 0:write */
+		u32 start:1;	/* enable indirect access */
+	} info;
+#define NBL_FEM_KT_ACC_CTRL_TBL_WIDTH (sizeof(struct nbl_fem_kt_acc_ctrl))
+	u8 data[NBL_FEM_KT_ACC_CTRL_TBL_WIDTH];
+};
+
+#define NBL_FEM_KT_ACC_CTRL (NBL_PPE_FEM_BASE + 0x00000340)
+
+union nbl_fem_kt_acc_ack_u {
+	struct nbl_fem_kt_acc_ack {
+		u32 done:1;	/* indirect access has finished */
+		u32 status:1;	/* indirect access error */
+		u32 rsv:30;
+	} info;
+#define NBL_FEM_KT_ACC_ACK_TBL_WIDTH (sizeof(struct nbl_fem_kt_acc_ack))
+	u8 data[NBL_FEM_KT_ACC_ACK_TBL_WIDTH];
+};
+
+#define NBL_FEM_KT_ACC_ACK (NBL_PPE_FEM_BASE + 0x00000344)
+
+union nbl_search_ctrl_u {
+	struct nbl_search_ctrl {
+		u32 rsv:31;
+		u32 start:1;
+	} info;
+#define NBL_SEARCH_CTRL_WIDTH (sizeof(struct nbl_search_ctrl))
+	u8 data[NBL_SEARCH_CTRL_WIDTH];
+};
+
+union nbl_search_ack_u {
+	struct nbl_search_ack {
+		u32 done:1;
+		u32 status:1;
+		u32 rsv:30;
+	} info;
+#define NBL_SEARCH_ACK_WIDTH (sizeof(struct nbl_search_ack))
+	u8 data[NBL_SEARCH_ACK_WIDTH];
+};
+
+#define NBL_FEM_EM0_TCAM_TABLE_ADDR (0xa0b000)
+#define NBL_FEM_EM_TCAM_TABLE_DEPTH (64)
+#define NBL_FEM_EM_TCAM_TABLE_WIDTH (256)
+
+union fem_em_tcam_table_u {
+	struct fem_em_tcam_table {
+		u32 key[5];	/* [159:0] Default:0x0 RW */
+		u32 key_vld:1;	/* [160] Default:0x0 RW */
+		u32 key_size:1;	/* [161] Default:0x0 RW */
+		u32 rsv:30;	/* [191:162] Default:0x0 RO */
+		u32 rsv1[2];	/* [255:192] Default:0x0 RO */
+	} info;
+	u32 data[NBL_FEM_EM_TCAM_TABLE_WIDTH / 32];
+	u8 hash_key[sizeof(struct fem_em_tcam_table)];
+};
+
+#define NBL_FEM_EM_TCAM_TABLE_REG(r, t) \
+	(NBL_FEM_EM0_TCAM_TABLE_ADDR + 0x1000 * (r) + \
+	 (NBL_FEM_EM_TCAM_TABLE_WIDTH / 8) * (t))
+
+#define NBL_FEM_EM0_AD_TABLE_ADDR (0xa08000)
+#define NBL_FEM_EM_AD_TABLE_DEPTH (64)
+#define NBL_FEM_EM_AD_TABLE_WIDTH (512)
+
+union fem_em_ad_table_u {
+	struct fem_em_ad_table {
+		u32 action0:22;		/* [21:0] Default:0x0 RW */
+		u32 action1:22;		/* [43:22] Default:0x0 RW */
+		u32 action2:22;		/* [65:44] Default:0x0 RW */
+		u32 action3:22;		/* [87:66] Default:0x0 RW */
+		u32 action4:22;		/* [109:88] Default:0x0 RW */
+		u32 action5:22;		/* [131:110] Default:0x0 RW */
+		u32 action6:22;		/* [153:132] Default:0x0 RW */
+		u32 action7:22;		/* [175:154] Default:0x0 RW */
+		u32 action8:22;		/* [197:176] Default:0x0 RW */
+		u32 action9:22;		/* [219:198] Default:0x0 RW */
+		u32 action10:22;	/* [241:220] Default:0x0 RW */
+		u32 action11:22;	/* [263:242] Default:0x0 RW */
+		u32 action12:22;	/* [285:264] Default:0x0 RW */
+		u32 action13:22;	/* [307:286] Default:0x0 RW */
+		u32 action14:22;	/* [329:308] Default:0x0 RW */
+		u32 action15:22;	/* [351:330] Default:0x0 RW */
+		u32 rsv[5];		/* [511:352] Default:0x0 RO */
+	} info;
+	u32 data[NBL_FEM_EM_AD_TABLE_WIDTH / 32];
+	u8 hash_key[sizeof(struct fem_em_ad_table)];
+};
+
+#define NBL_FEM_EM_AD_TABLE_REG(r, t) \
+	(NBL_FEM_EM0_AD_TABLE_ADDR + 0x1000 * (r) + \
+	 (NBL_FEM_EM_AD_TABLE_WIDTH / 8) * (t))
+
+#define NBL_FLOW_TCAM_TOTAL_LEN 32
+#define NBL_FLOW_AD_TOTAL_LEN 64
+
+struct nbl_mcc_tbl {
+	u32 dport_act:16;
+	u32 dqueue_act:11;
+	u32 dqueue_en:1;
+	u32 dqueue_rsv:4;
+	u32 stateid_act:11;
+	u32 stateid_filter:1;
+	u32 flowid_filter:1;
+	u32 stateid_rsv:3;
+	u32 next_pntr:13;
+	u32 tail:1;
+	u32 vld:1;
+	u32 rsv:1;
+};
+
+union nbl_fem_ht_size_table_u {
+	struct nbl_fem_ht_size_table {
+		u32 pp0_size:5;
+		u32 rsv0:3;
+		u32 pp1_size:5;
+		u32 rsv1:3;
+		u32 pp2_size:5;
+		u32 rsv2:11;
+	} info;
+#define NBL_FEM_HT_SIZE_TBL_WIDTH (sizeof(struct nbl_fem_ht_size_table))
+	u8 data[NBL_FEM_HT_SIZE_TBL_WIDTH];
+};
+
+#define NBL_FEM_HT_SIZE_REG (NBL_PPE_FEM_BASE + 0x0000011c)
+
+#define NBL_FEM0_PROFILE_TABLE(t) \
+	(NBL_PPE_FEM_BASE + 0x00001000 + (NBL_FEM_PROFILE_TBL_WIDTH) * (t))
+
+/* ---------- REG BASE ADDR ---------- */
+#define NBL_LB_PCIEX16_TOP_BASE (0x01500000)
+/* PPE modules base addr */
+#define NBL_PPE_FEM_BASE (0x00a04000)
+#define NBL_PPE_IPRO_BASE (0x00b04000)
+#define NBL_PPE_PP0_BASE (0x00b14000)
+#define NBL_PPE_PP1_BASE (0x00b24000)
+#define NBL_PPE_PP2_BASE (0x00b34000)
+#define NBL_PPE_MCC_BASE (0x00b44000)
+#define NBL_PPE_ACL_BASE (0x00b64000)
+#define NBL_PPE_CAP_BASE (0x00e64000)
+#define NBL_PPE_EPRO_BASE (0x00e74000)
+#define NBL_PPE_DPRBAC_BASE (0x00904000)
+#define NBL_PPE_UPRBAC_BASE (0x0000C000)
+/* Interface modules base addr */
+#define NBL_INTF_HOST_PCOMPLETER_BASE (0x00f08000)
+#define NBL_INTF_HOST_PADPT_BASE (0x00f4c000)
+#define NBL_INTF_HOST_CTRLQ_BASE (0x00f8c000)
+#define NBL_INTF_HOST_VDPA_NET_BASE (0x00f98000)
+#define NBL_INTF_HOST_CMDQ_BASE (0x00fa0000)
+#define NBL_INTF_HOST_MAILBOX_BASE (0x00fb0000)
+#define NBL_INTF_HOST_PCIE_BASE (0x01504000)
+#define NBL_INTF_HOST_PCAP_BASE (0x015a4000)
+/* DP modules base addr */
+#define NBL_DP_URMUX_BASE (0x00008000)
+#define NBL_DP_UPRBAC_BASE (0x0000C000)
+#define NBL_DP_UPA_BASE (0x0008C000)
+#define NBL_DP_USTORE_BASE (0x00104000)
+#define NBL_DP_UPMEM_BASE (0x00108000)
+#define NBL_DP_UBM_BASE (0x0010c000)
+#define NBL_DP_UQM_BASE (0x00114000)
+#define NBL_DP_USTAT_BASE (0x0011c000)
+#define NBL_DP_UPED_BASE (0x0015c000)
+#define NBL_DP_UCAR_BASE (0x00e84000)
+#define NBL_DP_UL4S_BASE (0x00204000)
+#define NBL_DP_UVN_BASE (0x00244000)
+#define NBL_DP_DSCH_BASE (0x00404000)
+#define NBL_DP_SHAPING_BASE (0x00504000)
+#define NBL_DP_DVN_BASE (0x00514000)
+#define NBL_DP_DL4S_BASE (0x00614000)
+#define NBL_DP_DRMUX_BASE (0x00654000)
+#define NBL_DP_DSTORE_BASE (0x00704000)
+#define NBL_DP_DPMEM_BASE (0x00708000)
+#define NBL_DP_DBM_BASE (0x0070c000)
+#define NBL_DP_DQM_BASE (0x00714000)
+#define NBL_DP_DSTAT_BASE (0x0071c000)
+#define NBL_DP_DPED_BASE (0x0075c000)
+#define NBL_DP_DPA_BASE (0x0085c000)
+#define NBL_DP_DPRBAC_BASE (0x00904000)
+#define NBL_DP_DDMUX_BASE (0x00984000)
+#define NBL_DP_LB_DDP_BUF_BASE (0x00000000)
+#define NBL_DP_LB_DDP_OUT_BASE (0x00000000)
+#define NBL_DP_LB_DDP_DIST_BASE (0x00000000)
+#define NBL_DP_LB_DDP_IN_BASE (0x00000000)
+#define NBL_DP_LB_UDP_BUF_BASE (0x00000000)
+#define NBL_DP_LB_UDP_OUT_BASE (0x00000000)
+#define NBL_DP_LB_UDP_DIST_BASE (0x00000000)
+#define NBL_DP_LB_UDP_IN_BASE (0x00000000)
+
+/* -------- LB -------- */
+#define NBL_LB_PF_CONFIGSPACE_SELECT_OFFSET (0x81100000)
+#define NBL_LB_PF_CONFIGSPACE_SELECT_STRIDE (0x00100000)
+#define NBL_LB_PF_CONFIGSPACE_BASE_ADDR (NBL_LB_PCIEX16_TOP_BASE + 0x00024000)
+#define NBL_LB_PCIEX16_TOP_AHB (NBL_LB_PCIEX16_TOP_BASE + 0x00000020)
+
+#define NBL_SRIOV_CAPS_OFFSET (0x140)
+
+/* -------- MAILBOX BAR2 ----- */
+#define NBL_MAILBOX_NOTIFY_ADDR (0x00000000)
+#define NBL_MAILBOX_BAR_REG (0x00000000)
+#define NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR (0x10)
+#define NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR (0x20)
+#define NBL_MAILBOX_QINFO_CFG_DBG_TABLE_ADDR (0x30)
+
+/* -------- ADMINQ BAR2 ----- */
+#define NBL_ADMINQ_NOTIFY_ADDR (0x40)
+#define NBL_ADMINQ_QINFO_CFG_RX_TABLE_ADDR (0x50)
+#define NBL_ADMINQ_QINFO_CFG_TX_TABLE_ADDR (0x60)
+#define NBL_ADMINQ_QINFO_CFG_DBG_TABLE_ADDR (0x78)
+#define NBL_ADMINQ_MSIX_MAP_TABLE_ADDR (0x80)
+
+/* -------- MAILBOX -------- */
+
+/* mailbox BAR qinfo_cfg_dbg_table */
+struct nbl_mailbox_qinfo_cfg_dbg_tbl {
+	u16 rx_drop;
+	u16 rx_get;
+	u16 tx_drop;
+	u16 tx_out;
+	u16 rx_hd_ptr;
+	u16 tx_hd_ptr;
+	u16 rx_tail_ptr;
+	u16 tx_tail_ptr;
+};
+
+/* mailbox BAR qinfo_cfg_table */
+struct nbl_mailbox_qinfo_cfg_table {
+	u32 queue_base_addr_l;
+	u32 queue_base_addr_h;
+	u32 queue_size_bwind:4;
+	u32 rsv1:28;
+	u32 queue_rst:1;
+	u32 queue_en:1;
+	u32 dif_err:1;
+	u32 ptr_err:1;
+	u32 rsv2:28;
+};
+
+/* -------- ADMINQ -------- */
+struct nbl_adminq_qinfo_map_table {
+	u32 function:3;
+	u32 devid:5;
+	u32 bus:8;
+	u32 msix_idx:13;
+	u32 msix_idx_valid:1;
+	u32 rsv:2;
+};
+
+/* -------- MAILBOX BAR0 ----- */
+/* mailbox qinfo_map_table */
+#define NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id) \
+	(NBL_INTF_HOST_MAILBOX_BASE + 0x00001000 + \
+	 (func_id) * sizeof(struct nbl_mailbox_qinfo_map_table))
+
+/* MAILBOX qinfo_map_table */
+struct nbl_mailbox_qinfo_map_table {
+	u32 function:3;
+	u32 devid:5;
+	u32 bus:8;
+	u32 msix_idx:13;
+	u32 msix_idx_valid:1;
+	u32 rsv:2;
+};
+
+/* -------- HOST_PCIE -------- */
+#define NBL_PCIE_HOST_K_PF_MASK_REG (NBL_INTF_HOST_PCIE_BASE + 0x00001004)
+#define NBL_PCIE_HOST_K_PF_FID(pf_id) \
+	(NBL_INTF_HOST_PCIE_BASE + 0x0000106C + 4 * (pf_id))
+#define NBL_PCIE_HOST_TL_CFG_BUSDEV (NBL_INTF_HOST_PCIE_BASE + 0x11040)
+
+/* -------- HOST_PADPT -------- */
+#define NBL_HOST_PADPT_HOST_CFG_FC_PD_DN (NBL_INTF_HOST_PADPT_BASE + 0x00000160)
+#define NBL_HOST_PADPT_HOST_CFG_FC_PH_DN (NBL_INTF_HOST_PADPT_BASE + 0x00000164)
+#define NBL_HOST_PADPT_HOST_CFG_FC_NPH_DN \
+	(NBL_INTF_HOST_PADPT_BASE + 0x0000016C)
+#define NBL_HOST_PADPT_HOST_CFG_FC_CPLH_UP \
+	(NBL_INTF_HOST_PADPT_BASE + 0x00000170)
+/* host_padpt host_msix_info */
+#define NBL_PADPT_ABNORMAL_MSIX_VEC (NBL_INTF_HOST_PADPT_BASE + 0x00000200)
+#define NBL_PADPT_ABNORMAL_TIMEOUT (NBL_INTF_HOST_PADPT_BASE + 0x00000204)
+#define NBL_PADPT_HOST_MSIX_INFO_REG_ARR(vector_id) \
+	(NBL_INTF_HOST_PADPT_BASE + 0x00010000 + \
+	 (vector_id) * sizeof(struct nbl_host_msix_info))
+/* host_padpt host_vnet_qinfo */
+#define NBL_PADPT_HOST_VNET_QINFO_REG_ARR(queue_id) \
+	(NBL_INTF_HOST_PADPT_BASE + 0x00008000 + \
+	 (queue_id) * sizeof(struct nbl_host_vnet_qinfo))
+
+struct nbl_host_msix_info {
+	u32 intrl_pnum:16;
+	u32 intrl_rate:16;
+	u32 function:3;
+	u32 devid:5;
+	u32 bus:8;
+	u32 valid:1;
+	u32 msix_mask_en:1;
+	u32 rsv:14;
+};
+
+struct nbl_abnormal_msix_vector {
+	u32 idx:16;
+	u32 vld:1;
+	u32 rsv:15;
+};
+
+/* host_padpt host_vnet_qinfo */
+struct nbl_host_vnet_qinfo {
+	u32 function_id:3;
+	u32 device_id:5;
+	u32 bus_id:8;
+	u32 msix_idx:13;
+	u32 msix_idx_valid:1;
+	u32 log_en:1;
+	u32 valid:1;
+	u32 tph_en:1;
+	u32 ido_en:1;
+	u32 rlo_en:1;
+	u32 rsv0:29;
+};
+
+struct nbl_msix_notify {
+	u32 glb_msix_idx:13;
+	u32 rsv1:3;
+	u32 mask:1;
+	u32 rsv2:15;
+};
+
+/* -------- HOST_PCOMPLETER -------- */
+/* pcompleter_host pcompleter_host_virtio_qid_map_table */
+#define NBL_PCOMPLETER_QID_MAP_REG_ARR(select, i) \
+	(NBL_INTF_HOST_PCOMPLETER_BASE + 0x00010000 + \
+	 (select) * NBL_QID_MAP_TABLE_ENTRIES * \
+	 sizeof(struct nbl_virtio_qid_map_table) + \
+	 (i) * sizeof(struct nbl_virtio_qid_map_table))
+#define NBL_PCOMPLETER_FUNCTION_MSIX_MAP_REG_ARR(i) \
+	(NBL_INTF_HOST_PCOMPLETER_BASE + 0x00004000 + \
+	 (i) * sizeof(struct nbl_function_msix_map))
+#define NBL_PCOMPLETER_HOST_MSIX_FID_TABLE(i) \
+	(NBL_INTF_HOST_PCOMPLETER_BASE + 0x0003a000 + \
+	 (i) * sizeof(struct nbl_pcompleter_host_msix_fid_table))
+#define NBL_PCOMPLETER_INT_STATUS (NBL_INTF_HOST_PCOMPLETER_BASE + 0x00000000)
+#define NBL_PCOMPLETER_TLP_OUT_DROP_CNT \
+	(NBL_INTF_HOST_PCOMPLETER_BASE + 0x00002430)
+
+/* pcompleter_host pcompleter_host_virtio_table_ready */
+#define NBL_PCOMPLETER_QUEUE_TABLE_READY_REG \
+	(NBL_INTF_HOST_PCOMPLETER_BASE + 0x0000110C)
+/* pcompleter_host pcompleter_host_virtio_table_select */
+#define NBL_PCOMPLETER_QUEUE_TABLE_SELECT_REG \
+	(NBL_INTF_HOST_PCOMPLETER_BASE + 0x00001110)
+
+#define NBL_PCOMPLETER_MSIX_NOTIRY_OFFSET (0x1020)
+
+#define NBL_REG_WRITE_MAX_TRY_TIMES 2
+
+/* pcompleter_host virtio_qid_map_table */
+struct nbl_virtio_qid_map_table {
+	u32 local_qid:9;
+	u32 notify_addr_l:23;
+	u32 notify_addr_h;
+	u32 global_qid:12;
+	u32 ctrlq_flag:1;
+	u32 rsv1:19;
+	u32 rsv2;
+};
+
+struct nbl_pcompleter_host_msix_fid_table {
+	u32 fid:10;
+	u32 vld:1;
+	u32 rsv:21;
+};
+
+struct nbl_function_msix_map {
+	u64 msix_map_base_addr;
+	u32 function:3;
+	u32 devid:5;
+	u32 bus:8;
+	u32 valid:1;
+	u32 rsv0:15;
+	u32 rsv1;
+};
+
+struct nbl_queue_table_select {
+	u32 select:1;
+	u32 rsv:31;
+};
+
+struct nbl_queue_table_ready {
+	u32 ready:1;
+	u32 rsv:31;
+};
+
+/* IPRO ipro_queue_tbl */
+struct nbl_ipro_queue_tbl {
+	u32 vsi_id:10;
+	u32 vsi_en:1;
+	u32 rsv:21;
+};
+
+/* -------- HOST_PCAP -------- */
+#define NBL_HOST_PCAP_TX_CAP_EN (NBL_INTF_HOST_PCAP_BASE + 0x00000200)
+#define NBL_HOST_PCAP_TX_CAP_STORE (NBL_INTF_HOST_PCAP_BASE + 0x00000204)
+#define NBL_HOST_PCAP_TX_CAP_STALL (NBL_INTF_HOST_PCAP_BASE + 0x00000208)
+#define NBL_HOST_PCAP_RX_CAP_EN (NBL_INTF_HOST_PCAP_BASE + 0x00000800)
+#define NBL_HOST_PCAP_RX_CAP_STORE (NBL_INTF_HOST_PCAP_BASE + 0x00000804)
+#define NBL_HOST_PCAP_RX_CAP_STALL (NBL_INTF_HOST_PCAP_BASE + 0x00000808)
+
+/* ---------- DPED ---------- */
+#define NBL_DPED_VLAN_OFFSET (NBL_DP_DPED_BASE + 0x000003F4)
+#define NBL_DPED_DSCP_OFFSET_0 (NBL_DP_DPED_BASE + 0x000003F8)
+#define NBL_DPED_DSCP_OFFSET_1 (NBL_DP_DPED_BASE + 0x000003FC)
+
+/* DPED dped_hw_edt_prof */
+#define NBL_DPED_HW_EDT_PROF_TABLE(i) \
+	(NBL_DP_DPED_BASE + 0x00001000 + \
+	 (i) * sizeof(struct ped_hw_edit_profile))
+/* DPED dped_l4_ck_cmd_40 */
+
+/* DPED hw_edt_prof / UPED hw_edt_prof */
+struct ped_hw_edit_profile {
+	u32 l4_len:2;
+#define NBL_PED_L4_LEN_MDY_CMD_0 (0)
+#define NBL_PED_L4_LEN_MDY_CMD_1 (1)
+#define NBL_PED_L4_LEN_MDY_DISABLE (2)
+	u32 l3_len:2;
+#define NBL_PED_L3_LEN_MDY_CMD_0 (0)
+#define NBL_PED_L3_LEN_MDY_CMD_1 (1)
+#define NBL_PED_L3_LEN_MDY_DISABLE (2)
+	u32 l4_ck:3;
+#define NBL_PED_L4_CKSUM_CMD_0 (0)
+#define NBL_PED_L4_CKSUM_CMD_1 (1)
+#define NBL_PED_L4_CKSUM_CMD_2 (2)
+#define NBL_PED_L4_CKSUM_CMD_3 (3)
+#define NBL_PED_L4_CKSUM_CMD_4 (4)
+#define NBL_PED_L4_CKSUM_CMD_5 (5)
+#define NBL_PED_L4_CKSUM_CMD_6 (6)
+#define NBL_PED_L4_CKSUM_DISABLE (7)
+	u32 l3_ck:1;
+#define NBL_PED_L3_CKSUM_ENABLE (1)
+#define NBL_PED_L3_CKSUM_DISABLE (0)
+	u32 l4_ck_zero_free:1;
+#define NBL_PED_L4_CKSUM_ZERO_FREE_ENABLE (1)
+#define NBL_PED_L4_CKSUM_ZERO_FREE_DISABLE (0)
+	u32 rsv:23;
+};
+
+struct nbl_ped_hw_edit_profile_cfg {
+	u32 table_id;
+	struct ped_hw_edit_profile edit_prf;
+};
+
+/* ---------- UPED ---------- */
+/* UPED uped_hw_edt_prof */
+#define NBL_UPED_HW_EDT_PROF_TABLE(i) \
+	(NBL_DP_UPED_BASE + 0x00001000 + \
+	 (i) * sizeof(struct ped_hw_edit_profile))
+
+/* --------- SHAPING --------- */
+#define NBL_SHAPING_NET_TIMMING_ADD_ADDR (NBL_DP_SHAPING_BASE + 0x00000300)
+#define NBL_SHAPING_NET(i) \
+	(NBL_DP_SHAPING_BASE + 0x00001800 + \
+	 (i) * sizeof(struct nbl_shaping_net))
+
+/* cir 1, bandwidth 1kB/s in protocol environment */
+/* cir 1, bandwidth 1Mb/s */
+#define NBL_LR_LEONIS_SYS_CLK 15000.0 /* 0105tag Khz */
+#define NBL_LR_LEONIS_NET_SHAPING_CYCLE_MAX 25
+#define NBL_LR_LEONIS_NET_SHAPING_DPETH 600
+#define NBL_LR_LEONIS_NET_BUCKET_DEPTH 9600
+
+#define NBL_SHAPING_DPORT_25G_RATE 0x61A8
+#define NBL_SHAPING_DPORT_HALF_25G_RATE 0x30D4
+
+#define NBL_SHAPING_DPORT_100G_RATE 0x1A400
+#define NBL_SHAPING_DPORT_HALF_100G_RATE 0xD200
+
+#define NBL_UCAR_MAX_BUCKET_DEPTH 524287
+
+#define NBL_DSTORE_DROP_XOFF_TH 0xC8
+#define NBL_DSTORE_DROP_XON_TH 0x64
+
+#define NBL_DSTORE_DROP_XOFF_TH_100G 0x1F4
+#define NBL_DSTORE_DROP_XON_TH_100G 0x12C
+
+#define NBL_DSTORE_DROP_XOFF_TH_BOND_MAIN 0x180
+#define NBL_DSTORE_DROP_XON_TH_BOND_MAIN 0x180
+
+#define NBL_DSTORE_DROP_XOFF_TH_BOND_OTHER 0x64
+#define NBL_DSTORE_DROP_XON_TH_BOND_OTHER 0x64
+
+#define NBL_DSTORE_DROP_XOFF_TH_100G_BOND_MAIN 0x2D5
+#define NBL_DSTORE_DROP_XON_TH_100G_BOND_MAIN 0x2BC
+
+#define NBL_DSTORE_DROP_XOFF_TH_100G_BOND_OTHER 0x145
+#define NBL_DSTORE_DROP_XON_TH_100G_BOND_OTHER 0x12C
+
+#define NBL_DSTORE_DISC_BP_TH (NBL_DP_DSTORE_BASE + 0x00000630)
+
+struct dstore_disc_bp_th {
+	u32 xoff_th:10;
+	u32 rsv1:6;
+	u32 xon_th:10;
+	u32 rsv:5;
+	u32 en:1;
+};
+
+/* DSCH dsch_vn_sha2net_map_tbl */
+struct dsch_vn_sha2net_map_tbl {
+	u32 vld:1;
+	u32 reserve:31;
+};
+
+/* DSCH dsch_vn_net2sha_map_tbl */
+struct dsch_vn_net2sha_map_tbl {
+	u32 vld:1;
+	u32 reserve:31;
+};
+
+#define NBL_NET_SHAPING_RDMA_BASE_ID (448)
+
+struct dsch_psha_en {
+	u32 en:4;
+	u32 rsv:28;
+};
+
+/* SHAPING shaping_net */
+struct nbl_shaping_net {
+	u32 valid:1;
+	u32 depth:19;
+	u32 cir:19;
+	u32 pir:19;
+	u32 cbs:21;
+	u32 pbs:21;
+	u32 rsv:28;
+};
+
+struct nbl_shaping_dport {
+	u32 valid:1;
+	u32 depth:19;
+	u32 cir:19;
+	u32 pir:19;
+	u32 cbs:21;
+	u32 pbs:21;
+	u32 rsv:28;
+};
+
+struct nbl_shaping_dvn_dport {
+	u32 valid:1;
+	u32 depth:19;
+	u32 cir:19;
+	u32 pir:19;
+	u32 cbs:21;
+	u32 pbs:21;
+	u32 rsv:28;
+};
+
+/* ---------- DSCH ---------- */
+/* DSCH vn_host_qid_max */
+#define NBL_DSCH_NOTIFY_BITMAP_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00003000 + (i) * BYTES_PER_DWORD)
+#define NBL_DSCH_FLY_BITMAP_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00004000 + (i) * BYTES_PER_DWORD)
+#define NBL_DSCH_PORT_MAP_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00005000 + (i) * sizeof(struct nbl_port_map))
+/* DSCH dsch_vn_q2tc_cfg_tbl */
+#define NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00010000 + \
+	 (i) * sizeof(struct dsch_vn_q2tc_cfg_tbl))
+/* DSCH dsch_vn_n2g_cfg_tbl */
+#define NBL_DSCH_VN_N2G_CFG_TABLE_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00060000 + \
+	 (i) * sizeof(struct dsch_vn_n2g_cfg_tbl))
+/* DSCH dsch_vn_g2p_cfg_tbl */
+#define NBL_DSCH_VN_G2P_CFG_TABLE_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00064000 + \
+	 (i) * sizeof(struct dsch_vn_g2p_cfg_tbl))
+/* DSCH dsch_vn_sha2net_map_tbl */
+#define NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00070000 + \
+	 (i) * sizeof(struct dsch_vn_sha2net_map_tbl))
+/* DSCH dsch_vn_net2sha_map_tbl */
+#define NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00074000 + \
+	 (i) * sizeof(struct dsch_vn_net2sha_map_tbl))
+/* DSCH dsch_vn_tc_q_list_tbl */
+#define NBL_DSCH_VN_TC_Q_LIST_TABLE_REG_ARR(i) \
+	(NBL_DP_DSCH_BASE + 0x00040000 + \
+	 (i) * sizeof(struct dsch_vn_tc_q_list_tbl))
+/* DSCH dsch maxqid */
+#define NBL_DSCH_HOST_QID_MAX (NBL_DP_DSCH_BASE + 0x00000118)
+#define NBL_DSCH_VN_QUANTA_ADDR (NBL_DP_DSCH_BASE + 0x00000134)
+#define NBL_DSCH_INT_STATUS (NBL_DP_DSCH_BASE + 0x00000000)
+#define NBL_DSCH_RDMA_OTHER_ABN (NBL_DP_DSCH_BASE + 0x00000080)
+#define NBL_DSCH_RDMA_OTHER_ABN_BIT (0x4000)
+#define NBL_DSCH_RDMA_DPQM_DB_LOST (2)
+
+#define NBL_MAX_QUEUE_ID (0x7ff)
+#define NBL_HOST_QUANTA (0x8000)
+#define NBL_ECPU_QUANTA (0x1000)
+
+/* DSCH dsch_vn_q2tc_cfg_tbl */
+struct dsch_vn_q2tc_cfg_tbl {
+	u32 tcid:13;
+	u32 rsv:18;
+	u32 vld:1;
+};
+
+/* DSCH dsch_vn_n2g_cfg_tbl */
+struct dsch_vn_n2g_cfg_tbl {
+	u32 grpid:8;
+	u32 rsv:23;
+	u32 vld:1;
+};
+
+/* DSCH dsch_vn_tc_qlist_tbl */
+struct dsch_vn_tc_q_list_tbl {
+	u32 nxt:11;
+	u32 reserve:18;
+	u32 regi:1;
+	u32 fly:1;
+	u32 vld:1;
+};
+
+/* DSCH dsch_vn_g2p_cfg_tbl */
+struct dsch_vn_g2p_cfg_tbl {
+	u32 port:3;
+	u32 rsv:28;
+	u32 vld:1;
+};
+
+struct dsch_vn_quanta {
+	u32 h_qua:16;
+	u32 e_qua:16;
+};
+
+/* ---------- DVN ---------- */
+/* DVN dvn_queue_table */
+#define NBL_DVN_QUEUE_TABLE_ARR(i) \
+	(NBL_DP_DVN_BASE + 0x00020000 + (i) * sizeof(struct dvn_queue_table))
+#define NBL_DVN_QUEUE_CXT_TABLE_ARR(i) \
+	(NBL_DP_DVN_BASE + 0x00030000 + (i) * sizeof(struct dvn_queue_context))
+/* DVN dvn_queue_reset */
+#define NBL_DVN_QUEUE_RESET_REG (NBL_DP_DVN_BASE + 0x00000400)
+/* DVN dvn_queue_reset_done */
+#define NBL_DVN_QUEUE_RESET_DONE_REG (NBL_DP_DVN_BASE + 0x00000404)
+#define NBL_DVN_ECPU_QUEUE_NUM (NBL_DP_DVN_BASE + 0x0000041C)
+#define NBL_DVN_DESCREQ_NUM_CFG (NBL_DP_DVN_BASE + 0x00000430)
+#define NBL_DVN_DESC_WR_MERGE_TIMEOUT (NBL_DP_DVN_BASE + 0x00000480)
+#define NBL_DVN_DIF_REQ_RD_RO_FLAG (NBL_DP_DVN_BASE + 0x0000045C)
+#define NBL_DVN_INT_STATUS (NBL_DP_DVN_BASE + 0x00000000)
+#define NBL_DVN_DESC_DIF_ERR_CNT (NBL_DP_DVN_BASE + 0x0000003C)
+#define NBL_DVN_DESC_DIF_ERR_INFO (NBL_DP_DVN_BASE + 0x00000038)
+#define NBL_DVN_PKT_DIF_ERR_INFO (NBL_DP_DVN_BASE + 0x00000030)
+#define NBL_DVN_PKT_DIF_ERR_CNT (NBL_DP_DVN_BASE + 0x00000034)
+#define NBL_DVN_ERR_QUEUE_ID_GET (NBL_DP_DVN_BASE + 0x0000040C)
+#define NBL_DVN_BACK_PRESSURE_MASK (NBL_DP_DVN_BASE + 0x00000464)
+#define NBL_DVN_DESCRD_L2_UNAVAIL_CNT (NBL_DP_DVN_BASE + 0x00000A1C)
+#define NBL_DVN_DESCRD_L2_NOAVAIL_CNT (NBL_DP_DVN_BASE + 0x00000A20)
+
+#define DEFAULT_DVN_DESCREQ_NUMCFG (0x00080014)
+#define DEFAULT_DVN_100G_DESCREQ_NUMCFG (0x00080020)
+
+#define NBL_DVN_INT_PKT_DIF_ERR (4)
+#define DEFAULT_DVN_DESC_WR_MERGE_TIMEOUT_MAX (0x3FF)
+
+#define NBL_DVN_INT_DESC_DIF_ERR (5)
+
+struct nbl_dvn_descreq_num_cfg {
+	u32 avring_cfg_num:1;	/* split ring descreq_num 0:8, 1:16 */
+	u32 rsv0:3;
+	/* packed ring descreq_num 0:8, 1:12, 2:16, 3:20, 4:24, 5:26, 6:32, 7:32 */
+	u32 packed_l1_num:3;
+	u32 rsv1:25;
+};
+
+struct nbl_dvn_desc_wr_merge_timeout {
+	u32 cfg_cycle:10;
+	u32 rsv:22;
+};
+
+struct nbl_dvn_dif_req_rd_ro_flag {
+	u32 rd_desc_ro_en:1;
+	u32 rd_data_ro_en:1;
+	u32 rd_avring_ro_en:1;
+	u32 rsv:29;
+};
+
+/* DVN dvn_queue_table */
+struct dvn_queue_table {
+	u64 dvn_used_baddr;
+	u64 dvn_avail_baddr;
+	u64 dvn_queue_baddr;
+	u32 dvn_queue_size:4;
+	u32 dvn_queue_type:1;
+	u32 dvn_queue_en:1;
+	u32 dvn_extend_header_en:1;
+	u32 dvn_interleave_seg_disable:1;
+	u32 dvn_seg_disable:1;
+	u32 rsv0:23;
+	u32 rsv1:32;
+};
+
+/* DVN dvn_queue_context */
+struct dvn_queue_context {
+	u32 dvn_descrd_num:3;
+	u32 dvn_firstdescid:16;
+	u32 dvn_firstdesc:16;
+	u32 dvn_indirect_len:6;
+	u64 dvn_indirect_addr:64;
+	u32 dvn_indirect_next:5;
+	u32 dvn_l1_ring_read:16;
+	u32 dvn_avail_ring_read:16;
+	u32 dvn_ring_wrap_counter:1;
+	u32 dvn_lso_id:10;
+	u32 dvn_avail_ring_idx:16;
+	u32 dvn_used_ring_idx:16;
+	u32 dvn_indirect_left:1;
+	u32 dvn_desc_left:1;
+	u32 dvn_lso_flag:1;
+	u32 dvn_descrd_disable:1;
+	u32 dvn_queue_err:1;
+	u32 dvn_lso_drop:1;
+	u32 dvn_protected_bit:1;
+	u64 reserve;
+};
+
+/* DVN dvn_queue_reset */
+struct nbl_dvn_queue_reset {
+	u32 dvn_queue_index:11;
+	u32 vld:1;
+	u32 rsv:20;
+};
+
+/* DVN dvn_queue_reset_done */
+struct nbl_dvn_queue_reset_done {
+	u32 flag:1;
+	u32 rsv:31;
+};
+
+/* DVN dvn_desc_dif_err_info */
+struct dvn_desc_dif_err_info {
+	u32 queue_id:11;
+	u32 rsv:21;
+};
+
+struct dvn_pkt_dif_err_info {
+	u32 queue_id:11;
+	u32 rsv:21;
+};
+
+struct dvn_err_queue_id_get {
+	u32 pkt_flag:1;
+	u32 desc_flag:1;
+	u32 rsv:30;
+};
+
+/* ---------- UVN ---------- */
+/* UVN uvn_queue_table */
+#define NBL_UVN_QUEUE_TABLE_ARR(i) \
+	(NBL_DP_UVN_BASE + 0x00010000 + (i) * sizeof(struct uvn_queue_table))
+/* UVN uvn_queue_cxt */
+#define NBL_UVN_QUEUE_CXT_TABLE_ARR(i) \
+	(NBL_DP_UVN_BASE + 0x00020000 + (i) * sizeof(struct uvn_queue_cxt))
+/* UVN uvn_desc_cxt */
+#define NBL_UVN_DESC_CXT_TABLE_ARR(i) \
+	(NBL_DP_UVN_BASE + 0x00028000 + (i) * sizeof(struct uvn_desc_cxt))
+/* UVN uvn_queue_reset */
+#define NBL_UVN_QUEUE_RESET_REG (NBL_DP_UVN_BASE + 0x00000200)
+/* UVN uvn_queue_reset_done */
+#define NBL_UVN_QUEUE_RESET_DONE_REG (NBL_DP_UVN_BASE + 0x00000408)
+#define NBL_UVN_STATIS_PKT_DROP(i) \
+	(NBL_DP_UVN_BASE + 0x00038000 + (i) * sizeof(u32))
+#define NBL_UVN_INT_STATUS (NBL_DP_UVN_BASE + 0x00000000)
+#define NBL_UVN_QUEUE_ERR_INFO (NBL_DP_UVN_BASE + 0x00000034)
+#define NBL_UVN_QUEUE_ERR_CNT (NBL_DP_UVN_BASE + 0x00000038)
+#define NBL_UVN_DESC_RD_WAIT (NBL_DP_UVN_BASE + 0x0000020C)
+#define NBL_UVN_QUEUE_ERR_MASK (NBL_DP_UVN_BASE + 0x00000224)
+#define NBL_UVN_ECPU_QUEUE_NUM (NBL_DP_UVN_BASE + 0x0000023C)
+#define NBL_UVN_DESC_WR_TIMEOUT (NBL_DP_UVN_BASE + 0x00000214)
+#define NBL_UVN_DIF_DELAY_REQ (NBL_DP_UVN_BASE + 0x000010D0)
+#define NBL_UVN_DIF_DELAY_TIME (NBL_DP_UVN_BASE + 0x000010D4)
+#define NBL_UVN_DIF_DELAY_MAX (NBL_DP_UVN_BASE + 0x000010D8)
+#define NBL_UVN_DESC_PRE_DESC_REQ_NULL (NBL_DP_UVN_BASE + 0x000012C8)
+#define NBL_UVN_DESC_PRE_DESC_REQ_LACK (NBL_DP_UVN_BASE + 0x000012CC)
+#define NBL_UVN_DESC_RD_ENTRY (NBL_DP_UVN_BASE + 0x000012D0)
+#define NBL_UVN_DESC_RD_DROP_DESC_LACK (NBL_DP_UVN_BASE + 0x000012E0)
+#define NBL_UVN_DIF_REQ_RO_FLAG (NBL_DP_UVN_BASE + 0x00000250)
+#define NBL_UVN_DESC_PREFETCH_INIT (NBL_DP_UVN_BASE + 0x00000204)
+#define NBL_UVN_DESC_WR_TIMEOUT_4US (0x960)
+#define NBL_UVN_DESC_PREFETCH_NUM (4)
+
+#define NBL_UVN_INT_QUEUE_ERR (5)
+
+struct uvn_dif_req_ro_flag {
+	u32 avail_rd:1;
+	u32 desc_rd:1;
+	u32 pkt_wr:1;
+	u32 desc_wr:1;
+	u32 rsv:28;
+};
+
+/* UVN uvn_queue_table */
+struct uvn_queue_table {
+	u64 used_baddr;
+	u64 avail_baddr;
+	u64 queue_baddr;
+	u32 queue_size_mask_pow:4;
+	u32 queue_type:1;
+	u32 queue_enable:1;
+	u32 extend_header_en:1;
+	u32 guest_csum_en:1;
+	u32 half_offload_en:1;
+	u32 rsv0:23;
+	u32 rsv1:32;
+};
+
+/* uvn uvn_queue_cxt */
+struct uvn_queue_cxt {
+	u32 queue_head:16;
+	u32 wrap_count:1;
+	u32 queue_err:1;
+	u32 prefetch_null_cnt:2;
+	u32 ntf_finish:1;
+	u32 spnd_flag:1;
+	u32 reserve0:10;
+	u32 avail_idx:16;
+	u32 avail_idx_spnd_flag:1;
+	u32 reserve1:15;
+	u32 reserve2[2];
+};
+
+/* uvn uvn_queue_reset */
+struct nbl_uvn_queue_reset {
+	u32 index:11;
+	u32 rsv0:5;
+	u32 vld:1;
+	u32 rsv1:15;
+};
+
+/* uvn uvn_queue_reset_done */
+struct nbl_uvn_queue_reset_done {
+	u32 flag:1;
+	u32 rsv:31;
+};
+
+/* uvn uvn_desc_cxt */
+struct uvn_desc_cxt {
+	u32 cache_head:9;
+	u32 reserve0:7;
+	u32 cache_tail:9;
+	u32 reserve1:7;
+	u32 cache_pref_num_prev:9;
+	u32 reserve2:7;
+	u32 cache_pref_num_post:9;
+	u32 reserve3:7;
+	u32 cache_head_byte:30;
+	u32 reserve4:2;
+	u32 cache_tail_byte:30;
+	u32 reserve5:2;
+};
+
+struct uvn_desc_wr_timeout {
+	u32 num:15;
+	u32 mask:1;
+	u32 rsv:16;
+};
+
+struct uvn_queue_err_info {
+	u32 queue_id:11;
+	u32 type:5;
+	u32 rsv:16;
+};
+
+struct uvn_queue_err_mask {
+	u32 rsv0:1;
+	u32 buffer_len_err:1;
+	u32 next_err:1;
+	u32 indirect_err:1;
+	u32 split_err:1;
+	u32 dif_err:1;
+	u32 rsv1:26;
+};
+
+struct uvn_desc_prefetch_init {
+	u32 num:8;
+	u32 rsv1:8;
+	u32 sel:1;
+	u32 rsv:15;
+};
+
+/* -------- USTORE -------- */
+#define NBL_USTORE_PKT_LEN_ADDR (NBL_DP_USTORE_BASE + 0x00000108)
+#define NBL_USTORE_PORT_FC_TH_REG_ARR(port_id) \
+	(NBL_DP_USTORE_BASE + 0x00000134 + \
+	 (port_id) * sizeof(struct nbl_ustore_port_fc_th))
+#define NBL_USTORE_COS_FC_TH_REG_ARR(cos_id) \
+	(NBL_DP_USTORE_BASE + 0x00000200 + \
+	 (cos_id) * sizeof(struct nbl_ustore_cos_fc_th))
+#define NBL_USTORE_PORT_DROP_TH_REG_ARR(port_id) \
+	(NBL_DP_USTORE_BASE + 0x00000150 + \
+	 (port_id) * sizeof(struct nbl_ustore_port_drop_th))
+#define NBL_USTORE_BUF_TOTAL_DROP_PKT (NBL_DP_USTORE_BASE + 0x000010A8)
+#define NBL_USTORE_BUF_TOTAL_TRUN_PKT (NBL_DP_USTORE_BASE + 0x000010AC)
+#define NBL_USTORE_BUF_PORT_DROP_PKT(eth_id) \
+	(NBL_DP_USTORE_BASE + 0x00002500 + (eth_id) * sizeof(u32))
+#define NBL_USTORE_BUF_PORT_TRUN_PKT(eth_id) \
+	(NBL_DP_USTORE_BASE + 0x00002540 + (eth_id) * sizeof(u32))
+
+#define NBL_USTORE_SIGNLE_ETH_DROP_TH 0xC80
+#define NBL_USTORE_DUAL_ETH_DROP_TH 0x640
+#define NBL_USTORE_QUAD_ETH_DROP_TH 0x320
+
+/* USTORE pkt_len */
+struct ustore_pkt_len {
+	u32 min:7;
+	u32 rsv:8;
+	u32 min_chk_en:1;
+	u32 max:14;
+	u32 rsv2:1;
+	u32 max_chk_len:1;
+};
+
+/* USTORE port_fc_th */
+struct nbl_ustore_port_fc_th {
+	u32 xoff_th:12;
+	u32 rsv1:4;
+	u32 xon_th:12;
+	u32 rsv2:2;
+	u32 fc_set:1;
+	u32 fc_en:1;
+};
+
+/* USTORE cos_fc_th */
+struct nbl_ustore_cos_fc_th {
+	u32 xoff_th:12;
+	u32 rsv1:4;
+	u32 xon_th:12;
+	u32 rsv2:2;
+	u32 fc_set:1;
+	u32 fc_en:1;
+};
+
+#define NBL_MAX_USTORE_COS_FC_TH (4080)
+
+/* USTORE port_drop_th */
+struct nbl_ustore_port_drop_th {
+	u32 disc_th:12;
+	u32 rsv:19;
+	u32 en:1;
+};
+
+/* ---------- UL4S ---------- */
+#define NBL_UL4S_SCH_PAD_ADDR (NBL_DP_UL4S_BASE + 0x000006c4)
+
+/* UL4S ul4s_sch_pad */
+struct ul4s_sch_pad {
+	u32 en:1;
+	u32 clr:1;
+	u32 rsv:30;
+};
+
+/* --------- DSTAT --------- */
+#define NBL_DSTAT_VSI_STAT(vsi_id) \
+	(NBL_DP_DSTAT_BASE + 0x00008000 + \
+	 (vsi_id) * sizeof(struct nbl_dstat_vsi_stat))
+
+struct nbl_dstat_vsi_stat {
+	u32 fwd_byte_cnt_low;
+	u32 fwd_byte_cnt_high;
+	u32 fwd_pkt_cnt_low;
+	u32 fwd_pkt_cnt_high;
+};
+
+/* --------- USTAT --------- */
+#define NBL_USTAT_VSI_STAT(vsi_id) \
+	(NBL_DP_USTAT_BASE + 0x00008000 + \
+	 (vsi_id) * sizeof(struct nbl_ustat_vsi_stat))
+
+struct nbl_ustat_vsi_stat {
+	u32 fwd_byte_cnt_low;
+	u32 fwd_byte_cnt_high;
+	u32 fwd_pkt_cnt_low;
+	u32 fwd_pkt_cnt_high;
+};
+
+/* ---------- IPRO ---------- */
+/* ipro module related macros */
+#define NBL_IPRO_MODULE (0xB04000)
+/* ipro queue tbl */
+#define NBL_IPRO_QUEUE_TBL(i) \
+	(NBL_IPRO_MODULE + 0x00004000 + (i) * sizeof(struct nbl_ipro_queue_tbl))
+#define NBL_IPRO_UP_SPORT_TABLE(i) \
+	(NBL_IPRO_MODULE + 0x00007000 + \
+	 (i) * sizeof(struct nbl_ipro_upsport_tbl))
+#define NBL_IPRO_DN_SRC_PORT_TABLE(i) \
+	(NBL_PPE_IPRO_BASE + 0x00008000 + \
+	 (i) * sizeof(struct nbl_ipro_dn_src_port_tbl))
+
+enum nbl_fwd_type_e {
+	NBL_FWD_TYPE_NORMAL = 0,
+	NBL_FWD_TYPE_CPU_ASSIGNED = 1,
+	NBL_FWD_TYPE_UPCALL = 2,
+	NBL_FWD_TYPE_SRC_MIRROR = 3,
+	NBL_FWD_TYPE_OTHER_MIRROR = 4,
+	NBL_FWD_TYPE_MNG = 5,
+	NBL_FWD_TYPE_GLB_LB = 6,
+	NBL_FWD_TYPE_DROP = 7,
+	NBL_FWD_TYPE_MAX = 8,
+};
+
+/* IPRO dn_src_port_tbl */
+struct nbl_ipro_dn_src_port_tbl {
+	u32 entry_vld:1;
+	u32 mirror_en:1;
+	u32 mirror_pr:2;
+	u32 mirror_id:4;
+	u32 vlan_layer_num_1:2;
+	u32 hw_flow:1;
+	u32 mtu_sel:4;
+	u32 addr_check_en:1;
+	u32 smac_low:16;
+	u32 smac_high;
+	u32 dqueue:11;
+	u32 dqueue_en:1;
+	u32 dqueue_pri:2;
+	u32 set_dport_pri:2;
+	union nbl_action_data set_dport;
+	u32 set_dport_en:1;
+	u32 proc_done:1;
+ u32 not_used_1:6; + u32 rsv:24; +}; + +/* IPRO up sport tab */ +struct nbl_ipro_upsport_tbl { + u32 entry_vld:1; + u32 vlan_layer_num_0:2; + u32 vlan_layer_num_1:2; + u32 lag_vld:1; + u32 lag_id:2; + u32 hw_flow:1; + u32 mirror_en:1; + u32 mirror_pr:2; + u32 mirror_id:4; + u32 dqueue_pri:2; + u32 set_dport_pri:2; + u32 dqueue:11; + u32 dqueue_en:1; + union nbl_action_data set_dport; + u32 set_dport_en:1; + u32 proc_done:1; + u32 car_en:1; + u32 car_pr:2; + u32 car_id:10; + u32 rsv:1; +}; + +struct nbl_ipro_mtu_sel { + u32 mtu_1:16; /* [15:0] Default:0x0 RW */ + u32 mtu_0:16; /* [31:16] Default:0x0 RW */ +}; + +/* ---------- EPRO ---------- */ +#define NBL_EPRO_INT_STATUS (NBL_PPE_EPRO_BASE + 0x00000000) +#define NBL_EPRO_INT_MASK (NBL_PPE_EPRO_BASE + 0x00000004) +#define NBL_EPRO_RSS_KEY_REG (NBL_PPE_EPRO_BASE + 0x00000400) +#define NBL_EPRO_MIRROR_ACT_PRI_REG (NBL_PPE_EPRO_BASE + 0x00000234) +#define NBL_EPRO_ACTION_FILTER_TABLE(i) \ + (NBL_PPE_EPRO_BASE + 0x00001900 + \ + sizeof(struct nbl_epro_action_filter_tbl) * (i)) +/* epro epro_ept table */ +#define NBL_EPRO_EPT_TABLE(i) \ + (NBL_PPE_EPRO_BASE + 0x00001800 + (i) * sizeof(struct nbl_epro_ept_tbl)) +/* epro epro_vpt table */ +#define NBL_EPRO_VPT_TABLE(i) \ + (NBL_PPE_EPRO_BASE + 0x00004000 + (i) * sizeof(struct nbl_epro_vpt_tbl)) +/* epro epro_rss_pt table */ +#define NBL_EPRO_RSS_PT_TABLE(i) \ + (NBL_PPE_EPRO_BASE + 0x00002000 + \ + (i) * sizeof(struct nbl_epro_rss_pt_tbl)) +/* epro epro_rss_ret table */ +#define NBL_EPRO_RSS_RET_TABLE(i) \ + (NBL_PPE_EPRO_BASE + 0x00008000 + \ + (i) * sizeof(struct nbl_epro_rss_ret_tbl)) +/* epro epro_sch_cos_map table */ +#define NBL_EPRO_SCH_COS_MAP_TABLE(i, j) \ + (NBL_PPE_EPRO_BASE + 0x00000640 + ((i) * 0x20) + \ + (j) * sizeof(struct nbl_epro_cos_map)) +/* epro epro_port_pri_mdf_en */ +#define NBL_EPRO_PORT_PRI_MDF_EN (NBL_PPE_EPRO_BASE + 0x000006E0) +/* epro epro_act_sel_en */ +#define NBL_EPRO_ACT_SEL_EN_REG (NBL_PPE_EPRO_BASE + 0x00000214) +/* epro epro_kgen_ft 
table */ +#define NBL_EPRO_KGEN_FT_TABLE(i) \ + (NBL_PPE_EPRO_BASE + 0x00001980 + \ + (i) * sizeof(struct nbl_epro_kgen_ft_tbl)) + +struct nbl_epro_int_mask { + u32 fatal_err:1; + u32 fifo_uflw_err:1; + u32 fifo_dflw_err:1; + u32 cif_err:1; + u32 input_err:1; + u32 cfg_err:1; + u32 data_ucor_err:1; + u32 bank_cor_err:1; + u32 rsv2:24; +}; + +struct nbl_epro_rss_key { + u64 key0; + u64 key1; + u64 key2; + u64 key3; + u64 key4; +}; + +struct nbl_epro_mirror_act_pri { + u32 car_idx_pri:2; + u32 dqueue_pri:2; + u32 dport_pri:2; + u32 rsv:26; +}; + +/* EPRO epro_rss_ret table */ +struct nbl_epro_rss_ret_tbl { + u32 dqueue0:11; + u32 vld0:1; + u32 rsv0:4; + u32 dqueue1:11; + u32 vld1:1; + u32 rsv1:4; +}; + +/* EPRO epro_rss_pt table */ +struct nbl_epro_rss_pt_tbl { + u32 entry_size:3; +#define NBL_EPRO_RSS_ENTRY_SIZE_16 (0) +#define NBL_EPRO_RSS_ENTRY_SIZE_32 (1) +#define NBL_EPRO_RSS_ENTRY_SIZE_64 (2) +#define NBL_EPRO_RSS_ENTRY_SIZE_128 (3) +#define NBL_EPRO_RSS_ENTRY_SIZE_256 (4) + u32 offset1:14; + u32 offset1_vld:1; + u32 offset0:14; + u32 offset0_vld:1; + u32 vld:1; + u32 rsv:30; +}; + +/*EPRO sch cos map*/ +struct nbl_epro_cos_map { + u32 pkt_cos:3; + u32 dscp:6; + u32 rsv:23; +}; + +/* EPRO epro_port_pri_mdf_en */ +struct nbl_epro_port_pri_mdf_en_cfg { + u32 eth0:1; + u32 eth1:1; + u32 eth2:1; + u32 eth3:1; + u32 loop:1; + u32 rsv:27; +}; + +enum nbl_md_action_id_e { + NBL_MD_ACTION_NONE = 0, + NBL_MD_ACTION_CLEAR_FLAG = 1, + NBL_MD_ACTION_SET_FLAG = NBL_MD_ACTION_CLEAR_FLAG, + NBL_MD_ACTION_SET_FWD = NBL_MD_ACTION_CLEAR_FLAG, + NBL_MD_ACTION_FLOWID0 = 2, + NBL_MD_ACTION_FLOWID1 = 3, + NBL_MD_ACTION_RSSIDX = 4, + NBL_MD_ACTION_PORT_CARIDX = 5, + NBL_MD_ACTION_FLOW_CARIDX = 6, + NBL_MD_ACTION_TABLE_INDEX = 7, + NBL_MD_ACTION_MIRRIDX = 8, + NBL_MD_ACTION_DPORT = 9, + NBL_MD_ACTION_SET_DPORT = NBL_MD_ACTION_DPORT, + NBL_MD_ACTION_DQUEUE = 10, + NBL_MD_ACTION_MCIDX = 13, + NBL_MD_ACTION_VNI0 = 14, + NBL_MD_ACTION_VNI1 = 15, + NBL_MD_ACTION_STAT_IDX = 16, + 
NBL_MD_ACTION_PRBAC_IDX = 17,
+	NBL_MD_ACTION_L4S_IDX = NBL_MD_ACTION_PRBAC_IDX,
+	NBL_MD_ACTION_DP_HASH0 = 19,
+	NBL_MD_ACTION_DP_HASH1 = 20,
+	NBL_MD_ACTION_MDF_PRI = 21,
+
+	NBL_MD_ACTION_MDF_V4_SIP = 32,
+	NBL_MD_ACTION_MDF_V4_DIP = 33,
+	NBL_MD_ACTION_MDF_V6_SIP = 34,
+	NBL_MD_ACTION_MDF_V6_DIP = 35,
+	NBL_MD_ACTION_MDF_DPORT = 36,
+	NBL_MD_ACTION_MDF_SPORT = 37,
+	NBL_MD_ACTION_MDF_DMAC = 38,
+	NBL_MD_ACTION_MDF_SMAC = 39,
+	NBL_MD_ACTION_MDF_V4_DSCP_ECN = 40,
+	NBL_MD_ACTION_MDF_V6_DSCP_ECN = 41,
+	NBL_MD_ACTION_MDF_V4_TTL = 42,
+	NBL_MD_ACTION_MDF_V6_HOPLIMIT = 43,
+	NBL_MD_ACTION_DEL_O_VLAN = 44,
+	NBL_MD_ACTION_DEL_I_VLAN = 45,
+	NBL_MD_ACTION_MDF_O_VLAN = 46,
+	NBL_MD_ACTION_MDF_I_VLAN = 47,
+	NBL_MD_ACTION_ADD_O_VLAN = 48,
+	NBL_MD_ACTION_ADD_I_VLAN = 49,
+	NBL_MD_ACTION_ENCAP_TNL = 50,
+	NBL_MD_ACTION_DECAP_TNL = 51,
+	NBL_MD_ACTION_MDF_TNL_SPORT = 52,
+};
+
+/* EPRO action filter table */
+struct nbl_epro_action_filter_tbl {
+	u64 filter_mask;
+};
+
+/* EPRO epro_ept table */
+struct nbl_epro_ept_tbl {
+	u32 cvlan:16;
+	u32 svlan:16;
+	u32 fwd:1;
+#define NBL_EPRO_FWD_TYPE_DROP (0)
+#define NBL_EPRO_FWD_TYPE_NORMAL (1)
+	u32 mirror_en:1;
+	u32 mirror_id:4;
+	u32 pop_i_vlan:1;
+	u32 pop_o_vlan:1;
+	u32 push_i_vlan:1;
+	u32 push_o_vlan:1;
+	u32 replace_i_vlan:1;
+	u32 replace_o_vlan:1;
+	u32 lag_alg_sel:2;
+#define NBL_EPRO_LAG_ALG_L2_HASH (0)
+#define NBL_EPRO_LAG_ALG_L23_HASH (1)
+#define NBL_EPRO_LAG_ALG_LINUX_L34_HASH (2)
+#define NBL_EPRO_LAG_ALG_DPDK_L34_HASH (3)
+	u32 lag_port_btm:4;
+	u32 lag_l2_protect_en:1;
+	u32 pfc_sch_cos_default:3;
+	u32 pfc_mode:1;
+	u32 vld:1;
+	u32 rsv:8;
+};
+
+/* EPRO epro_vpt table */
+struct nbl_epro_vpt_tbl {
+	u32 cvlan:16;
+	u32 svlan:16;
+	u32 fwd:1;
+#define NBL_EPRO_FWD_TYPE_DROP (0)
+#define NBL_EPRO_FWD_TYPE_NORMAL (1)
+	u32 mirror_en:1;
+	u32 mirror_id:4;
+	u32 car_en:1;
+	u32 car_id:10;
+	u32 pop_i_vlan:1;
+	u32 pop_o_vlan:1;
+	u32 push_i_vlan:1;
+	u32 push_o_vlan:1;
+	u32 replace_i_vlan:1;
+	u32 
replace_o_vlan:1; + u32 rss_alg_sel:1; +#define NBL_EPRO_RSS_ALG_TOEPLITZ_HASH (0) +#define NBL_EPRO_RSS_ALG_CRC32 (1) + u32 rss_key_type_ipv4:1; +#define NBL_EPRO_RSS_KEY_TYPE_IPV4_L3 (0) +#define NBL_EPRO_RSS_KEY_TYPE_IPV4_L4 (1) + u32 rss_key_type_ipv6:1; +#define NBL_EPRO_RSS_KEY_TYPE_IPV6_L3 (0) +#define NBL_EPRO_RSS_KEY_TYPE_IPV6_L4 (1) + u32 vld:1; + u32 rsv:5; +}; + +/* UPA upa_pri_sel_conf */ +#define NBL_UPA_PRI_SEL_CONF_TABLE(id) \ + (NBL_DP_UPA_BASE + 0x00000230 + \ + ((id) * sizeof(struct nbl_upa_pri_sel_conf))) +#define NBL_UPA_PRI_CONF_TABLE(id) \ + (NBL_DP_UPA_BASE + 0x00002000 + \ + ((id) * sizeof(struct nbl_upa_pri_conf))) + +/* UPA pri_sel_conf */ +struct nbl_upa_pri_sel_conf { + u32 pri_sel:5; + u32 pri_default:3; + u32 pri_disen:1; + u32 rsv:23; +}; + +/* UPA pri_conf_table */ +struct nbl_upa_pri_conf { + u32 pri0:4; + u32 pri1:4; + u32 pri2:4; + u32 pri3:4; + u32 pri4:4; + u32 pri5:4; + u32 pri6:4; + u32 pri7:4; +}; + +#define NBL_DQM_RXMAC_TX_PORT_BP_EN (NBL_DP_DQM_BASE + 0x00000660) +#define NBL_DQM_RXMAC_TX_COS_BP_EN (NBL_DP_DQM_BASE + 0x00000664) +#define NBL_DQM_RXMAC_RX_PORT_BP_EN (NBL_DP_DQM_BASE + 0x00000670) +#define NBL_DQM_RX_PORT_BP_EN (NBL_DP_DQM_BASE + 0x00000610) +#define NBL_DQM_RX_COS_BP_EN (NBL_DP_DQM_BASE + 0x00000614) + +/* DQM rxmac_tx_port_bp_en */ +struct nbl_dqm_rxmac_tx_port_bp_en_cfg { + u32 eth0:1; + u32 eth1:1; + u32 eth2:1; + u32 eth3:1; + u32 rsv:28; +}; + +/* DQM rxmac_tx_cos_bp_en */ +struct nbl_dqm_rxmac_tx_cos_bp_en_cfg { + u32 eth0:8; + u32 eth1:8; + u32 eth2:8; + u32 eth3:8; +}; + +#define NBL_UQM_QUE_TYPE (NBL_DP_UQM_BASE + 0x0000013c) +#define NBL_UQM_RX_COS_BP_EN (NBL_DP_UQM_BASE + 0x00000614) +#define NBL_UQM_TX_COS_BP_EN (NBL_DP_UQM_BASE + 0x00000604) + +#define NBL_UQM_DROP_PKT_CNT (NBL_DP_UQM_BASE + 0x000009C0) +#define NBL_UQM_DROP_PKT_SLICE_CNT (NBL_DP_UQM_BASE + 0x000009C4) +#define NBL_UQM_DROP_PKT_LEN_ADD_CNT (NBL_DP_UQM_BASE + 0x000009C8) +#define NBL_UQM_DROP_HEAD_PNTR_ADD_CNT (NBL_DP_UQM_BASE 
+ 0x000009CC) +#define NBL_UQM_DROP_WEIGHT_ADD_CNT (NBL_DP_UQM_BASE + 0x000009D0) +#define NBL_UQM_PORT_DROP_PKT_CNT (NBL_DP_UQM_BASE + 0x000009D4) +#define NBL_UQM_PORT_DROP_PKT_SLICE_CNT (NBL_DP_UQM_BASE + 0x000009F4) +#define NBL_UQM_PORT_DROP_PKT_LEN_ADD_CNT (NBL_DP_UQM_BASE + 0x00000A14) +#define NBL_UQM_PORT_DROP_HEAD_PNTR_ADD_CNT (NBL_DP_UQM_BASE + 0x00000A34) +#define NBL_UQM_PORT_DROP_WEIGHT_ADD_CNT (NBL_DP_UQM_BASE + 0x00000A54) +#define NBL_UQM_FWD_DROP_CNT (NBL_DP_UQM_BASE + 0x00000A80) +#define NBL_UQM_DPORT_DROP_CNT (NBL_DP_UQM_BASE + 0x00000B74) + +#define NBL_UQM_PORT_DROP_DEPTH 6 +#define NBL_UQM_DPORT_DROP_DEPTH 16 + +struct nbl_uqm_que_type { + u32 bp_drop:1; + u32 rsv:31; +}; + +/* UQM rx_cos_bp_en */ +struct nbl_uqm_rx_cos_bp_en_cfg { + u32 vld_l; + u32 vld_h:16; +}; + +/* UQM rx_port_bp_en */ +struct nbl_uqm_rx_port_bp_en_cfg { + u32 l4s_h:1; + u32 l4s_e:1; + u32 rdma_h:1; + u32 rdma_e:1; + u32 emp:1; + u32 loopback:1; + u32 rsv:26; +}; + +/* UQM tx_cos_bp_en */ +struct nbl_uqm_tx_cos_bp_en_cfg { + u32 vld_l; + u32 vld_h:8; +}; + +#pragma pack() + +/* ---------- TOP ---------- */ +/* lb_top_ctrl_crg_cfg crg_cfg */ +#define NBL_TOP_CTRL_MODULE (0x01300000) +#define NBL_TOP_CTRL_INT_STATUS (NBL_TOP_CTRL_MODULE + 0X0000) +#define NBL_TOP_CTRL_INT_MASK (NBL_TOP_CTRL_MODULE + 0X0004) +#define NBL_TOP_CTRL_LB_CLK (NBL_TOP_CTRL_MODULE + 0X0100) +#define NBL_TOP_CTRL_LB_RST (NBL_TOP_CTRL_MODULE + 0X0104) +#define NBL_TOP_CTRL_TVSENSOR0 (NBL_TOP_CTRL_MODULE + 0X0254) +#define NBL_TOP_CTRL_SOFT_DEF0 (NBL_TOP_CTRL_MODULE + 0x0430) +#define NBL_TOP_CTRL_SOFT_DEF1 (NBL_TOP_CTRL_MODULE + 0x0434) +#define NBL_TOP_CTRL_SOFT_DEF2 (NBL_TOP_CTRL_MODULE + 0x0438) +#define NBL_TOP_CTRL_SOFT_DEF3 (NBL_TOP_CTRL_MODULE + 0x043c) +#define NBL_TOP_CTRL_SOFT_DEF4 (NBL_TOP_CTRL_MODULE + 0x0440) +#define NBL_TOP_CTRL_SOFT_DEF5 (NBL_TOP_CTRL_MODULE + 0x0444) +#define NBL_TOP_CTRL_VERSION_INFO (NBL_TOP_CTRL_MODULE + 0X0900) +#define NBL_TOP_CTRL_VERSION_DATE 
(NBL_TOP_CTRL_MODULE + 0X0904)
+
+#define NBL_FW_HEARTBEAT_PONG NBL_TOP_CTRL_SOFT_DEF1
+
+#define NBL_TOP_CTRL_RDMA_LB_RST BIT(10)
+#define NBL_TOP_CTRL_RDMA_LB_CLK BIT(10)
+
+/* temperature threshold1 */
+#define NBL_LEONIS_TEMP_MAX (105)
+/* temperature threshold2 */
+#define NBL_LEONIS_TEMP_CRIT (115)
+
+#define NBL_ACT_DATA_BITS (16)
+
+#define NBL_CMDQ_DIF_MODE_VALUE (2)
+#define NBL_CMDQ_DELAY_200US (200)
+#define NBL_CMDQ_DELAY_300US (300)
+#define NBL_CMDQ_RESET_MAX_WAIT (30)
+#define NBL_CMD_NOTIFY_ADDR (0x00001000)
+#define NBL_ACL_RD_RETRY (50000)
+#define NBL_ACL_RD_WAIT_100US (100)
+#define NBL_ACL_RD_WAIT_200US (200)
+#define NBL_ACL_CPU_WRITE (0)
+#define NBL_ACL_CPU_READ (1)
+
+/* the capacity of storing acl-items in all tcams */
+#define NBL_ACL_ITEM_CAP (1536)
+#define NBL_ACL_KEY_WIDTH (120)
+#define NBL_ACL_ITEM6_CAP (512)
+#define NBL_ACL_KEY6_WIDTH (240)
+#define NBL_ACL_TCAM_DEPTH (512)
+#define NBL_ACL_S1_PROFILE_ID (0)
+#define NBL_ACL_S2_PROFILE_ID (1)
+#define NBL_ACL_TCAM_CNT (16)
+#define NBL_ACL_TCAM_HALF (8)
+#define NBL_ACL_TCAM_BITS (40)
+#define NBL_ACL_HALF_TCAMS_BITS (320)
+#define NBL_ACL_HALF_TCAMS_BYTES (40)
+#define NBL_ACL_ALL_TCAMS_BITS (640)
+#define NBL_ACL_ALL_TCAMS_BYTES (80)
+#define NBL_ACL_ACT_RAM_CNT (4)
+
+#define NBL_BYTES_IN_REG (4)
+
+#define NBL_FEM_INIT_START_KERN (0xFE)
+#define NBL_FEM_INIT_START_VALUE (0x3E)
+#define NBL_PED_VSI_TYPE_ETH_BASE (1027)
+#define NBL_DPED_VLAN_TYPE_PORT_NUM (1031)
+#define NBL_CHAN_REG_MAX_LEN (32)
+#define NBL_EPRO_RSS_KEY_32 (0x6d5a6d5a)
+
+#define NBL_SHAPING_GRP_TIMMING_ADD_ADDR (0x504400)
+#define NBL_SHAPING_GRP_ADDR (0x504800)
+#define NBL_SHAPING_GRP_DWLEN (4)
+#define NBL_SHAPING_GRP_REG(r) \
+	(NBL_SHAPING_GRP_ADDR + (NBL_SHAPING_GRP_DWLEN * 4) * (r))
+#define NBL_DSCH_VN_SHA2GRP_MAP_TBL_ADDR (0x47c000)
+#define NBL_DSCH_VN_SHA2GRP_MAP_TBL_DWLEN (1)
+#define NBL_DSCH_VN_SHA2GRP_MAP_TBL_REG(r) \
+	(NBL_DSCH_VN_SHA2GRP_MAP_TBL_ADDR + \
+	
(NBL_DSCH_VN_SHA2GRP_MAP_TBL_DWLEN * 4) * (r)) +#define NBL_DSCH_VN_GRP2SHA_MAP_TBL_ADDR (0x480000) +#define NBL_DSCH_VN_GRP2SHA_MAP_TBL_DWLEN (1) +#define NBL_DSCH_VN_GRP2SHA_MAP_TBL_REG(r) \ + (NBL_DSCH_VN_GRP2SHA_MAP_TBL_ADDR + \ + (NBL_DSCH_VN_GRP2SHA_MAP_TBL_DWLEN * 4) * (r)) +#define NBL_SHAPING_DPORT_TIMMING_ADD_ADDR (0x504504) +#define NBL_SHAPING_DPORT_ADDR (0x504700) +#define NBL_SHAPING_DPORT_DWLEN (4) +#define NBL_SHAPING_DPORT_REG(r) \ + (NBL_SHAPING_DPORT_ADDR + (NBL_SHAPING_DPORT_DWLEN * 4) * (r)) +#define NBL_SHAPING_DVN_DPORT_ADDR (0x504750) +#define NBL_SHAPING_DVN_DPORT_DWLEN (4) +#define NBL_SHAPING_DVN_DPORT_REG(r) \ + (NBL_SHAPING_DVN_DPORT_ADDR + (NBL_SHAPING_DVN_DPORT_DWLEN * 4) * (r)) +#define NBL_SHAPING_RDMA_DPORT_ADDR (0x5047a0) +#define NBL_SHAPING_RDMA_DPORT_DWLEN (4) +#define NBL_SHAPING_RDMA_DPORT_REG(r) \ + (NBL_SHAPING_RDMA_DPORT_ADDR + (NBL_SHAPING_RDMA_DPORT_DWLEN * 4) * (r)) +#define NBL_DSCH_PSHA_EN_ADDR (0x404314) +#define NBL_SHAPING_NET_ADDR (0x505800) +#define NBL_SHAPING_NET_DWLEN (4) +#define NBL_SHAPING_NET_REG(r) \ + (NBL_SHAPING_NET_ADDR + (NBL_SHAPING_NET_DWLEN * 4) * (r)) +#define NBL_DSCH_VN_SHA2NET_MAP_TBL_ADDR (0x474000) +#define NBL_DSCH_VN_SHA2NET_MAP_TBL_DWLEN (1) +#define NBL_DSCH_VN_SHA2NET_MAP_TBL_REG(r) \ + (NBL_DSCH_VN_SHA2NET_MAP_TBL_ADDR + \ + (NBL_DSCH_VN_SHA2NET_MAP_TBL_DWLEN * 4) * (r)) +#define NBL_DSCH_VN_NET2SHA_MAP_TBL_ADDR (0x478000) +#define NBL_DSCH_VN_NET2SHA_MAP_TBL_DWLEN (1) +#define NBL_DSCH_VN_NET2SHA_MAP_TBL_REG(r) \ + (NBL_DSCH_VN_NET2SHA_MAP_TBL_ADDR + \ + (NBL_DSCH_VN_NET2SHA_MAP_TBL_DWLEN * 4) * (r)) + +#define NBL_DSCH_RDMA_SHA2NET_MAP_TBL_ADDR (0x49c000) +#define NBL_DSCH_RDMA_SHA2NET_MAP_TBL_DWLEN (1) +#define NBL_DSCH_RDMA_SHA2NET_MAP_TBL_REG(r) \ + (NBL_DSCH_RDMA_SHA2NET_MAP_TBL_ADDR + \ + (NBL_DSCH_RDMA_SHA2NET_MAP_TBL_DWLEN * 4) * (r)) +#define NBL_DSCH_RDMA_NET2SHA_MAP_TBL_ADDR (0x494000) +#define NBL_DSCH_RDMA_NET2SHA_MAP_TBL_DWLEN (1) +#define 
NBL_DSCH_RDMA_NET2SHA_MAP_TBL_REG(r) \ + (NBL_DSCH_RDMA_NET2SHA_MAP_TBL_ADDR + \ + (NBL_DSCH_RDMA_NET2SHA_MAP_TBL_DWLEN * 4) * (r)) + +/* Mailbox bar hw register offset begin */ +#define NBL_FW_HEARTBEAT_PING 0x84 +#define NBL_FW_BOARD_CONFIG 0x200 +#define NBL_FW_BOARD_DW3_OFFSET (NBL_FW_BOARD_CONFIG + 12) +#define NBL_FW_BOARD_DW6_OFFSET (NBL_FW_BOARD_CONFIG + 24) +#define NBL_ETH_REP_INFO_BASE (1024) + +/* Mailbox bar hw register offset end */ + +#define NBL_ACL_ACTION_RAM_TBL(r, i) \ + (NBL_ACL_BASE + 0x00002000 + 0x2000 * (r) + \ + (NBL_ACL_ACTION_RAM0_DWLEN * 4 * (i))) +#define NBL_DPED_MIR_CMD_0_TABLE(t) \ + (NBL_DPED_MIR_CMD_00_ADDR + (NBL_DPED_MIR_CMD_00_DWLEN * 2 * (t))) +#define NBL_SET_DPORT(upcall_flag, nxtstg_sel, port_type, port_id) \ + ((upcall_flag) << 14 | (nxtstg_sel) << 12 | (port_type) << 10 | \ + (port_id)) + +union nbl_fw_board_cfg_dw3 { + struct board_cfg_dw3 { + u32 port_type:1; + u32 port_num:7; + u32 port_speed:2; + u32 gpio_type:3; + u32 p4_version:1; /* 0: low version; 1: high version */ + u32 rsv:18; + } __packed info; + u32 data; +}; + +union nbl_fw_board_cfg_dw6 { + struct board_cfg_dw6 { + u8 lane_bitmap; + u8 eth_bitmap; + u16 rsv; + } __packed info; + u32 data; +}; + +#define NBL_LEONIS_QUIRKS_OFFSET (0x00000140) +#define NBL_LEONIS_ILLEGAL_REG_VALUE (0xDEADBEEF) + #endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c new file mode 100644 index 000000000000..6486fc74ab31 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.c @@ -0,0 +1,3863 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#include "nbl_hw_reg.h" +#include "nbl_hw_leonis.h" +#include "nbl_hw_leonis_regs.h" + +#define NBL_SEC_BLOCK_SIZE (0x100) +#define NBL_SEC000_SIZE (1) +#define NBL_SEC000_ADDR (0x114150) +#define NBL_SEC001_SIZE (1) +#define NBL_SEC001_ADDR (0x15c190) +#define NBL_SEC002_SIZE (1) +#define NBL_SEC002_ADDR (0x10417c) +#define NBL_SEC003_SIZE (1) +#define NBL_SEC003_ADDR (0x714154) +#define NBL_SEC004_SIZE (1) +#define NBL_SEC004_ADDR (0x75c190) +#define NBL_SEC005_SIZE (1) +#define NBL_SEC005_ADDR (0x70417c) +#define NBL_SEC006_SIZE (512) +#define NBL_SEC006_ADDR (0x8f000) +#define NBL_SEC006_REGI(i) (0x8f000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC007_SIZE (256) +#define NBL_SEC007_ADDR (0x8f800) +#define NBL_SEC007_REGI(i) (0x8f800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC008_SIZE (1024) +#define NBL_SEC008_ADDR (0x90000) +#define NBL_SEC008_REGI(i) (0x90000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC009_SIZE (2048) +#define NBL_SEC009_ADDR (0x94000) +#define NBL_SEC009_REGI(i) (0x94000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC010_SIZE (256) +#define NBL_SEC010_ADDR (0x96000) +#define NBL_SEC010_REGI(i) (0x96000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC011_SIZE (1024) +#define NBL_SEC011_ADDR (0x91000) +#define NBL_SEC011_REGI(i) (0x91000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC012_SIZE (128) +#define NBL_SEC012_ADDR (0x92000) +#define NBL_SEC012_REGI(i) (0x92000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC013_SIZE (64) +#define NBL_SEC013_ADDR (0x92200) +#define NBL_SEC013_REGI(i) (0x92200 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC014_SIZE (64) +#define NBL_SEC014_ADDR (0x92300) +#define NBL_SEC014_REGI(i) (0x92300 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC015_SIZE (1) +#define NBL_SEC015_ADDR (0x8c214) +#define NBL_SEC016_SIZE (1) +#define NBL_SEC016_ADDR (0x8c220) +#define NBL_SEC017_SIZE (1) +#define NBL_SEC017_ADDR (0x8c224) +#define NBL_SEC018_SIZE (1) +#define NBL_SEC018_ADDR (0x8c228) +#define NBL_SEC019_SIZE (1) +#define 
NBL_SEC019_ADDR (0x8c22c) +#define NBL_SEC020_SIZE (1) +#define NBL_SEC020_ADDR (0x8c1f0) +#define NBL_SEC021_SIZE (1) +#define NBL_SEC021_ADDR (0x8c1f8) +#define NBL_SEC022_SIZE (256) +#define NBL_SEC022_ADDR (0x85f000) +#define NBL_SEC022_REGI(i) (0x85f000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC023_SIZE (128) +#define NBL_SEC023_ADDR (0x85f800) +#define NBL_SEC023_REGI(i) (0x85f800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC024_SIZE (512) +#define NBL_SEC024_ADDR (0x860000) +#define NBL_SEC024_REGI(i) (0x860000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC025_SIZE (1024) +#define NBL_SEC025_ADDR (0x864000) +#define NBL_SEC025_REGI(i) (0x864000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC026_SIZE (256) +#define NBL_SEC026_ADDR (0x866000) +#define NBL_SEC026_REGI(i) (0x866000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC027_SIZE (512) +#define NBL_SEC027_ADDR (0x861000) +#define NBL_SEC027_REGI(i) (0x861000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC028_SIZE (64) +#define NBL_SEC028_ADDR (0x862000) +#define NBL_SEC028_REGI(i) (0x862000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC029_SIZE (32) +#define NBL_SEC029_ADDR (0x862200) +#define NBL_SEC029_REGI(i) (0x862200 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC030_SIZE (32) +#define NBL_SEC030_ADDR (0x862300) +#define NBL_SEC030_REGI(i) (0x862300 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC031_SIZE (1) +#define NBL_SEC031_ADDR (0x85c214) +#define NBL_SEC032_SIZE (1) +#define NBL_SEC032_ADDR (0x85c220) +#define NBL_SEC033_SIZE (1) +#define NBL_SEC033_ADDR (0x85c224) +#define NBL_SEC034_SIZE (1) +#define NBL_SEC034_ADDR (0x85c228) +#define NBL_SEC035_SIZE (1) +#define NBL_SEC035_ADDR (0x85c22c) +#define NBL_SEC036_SIZE (1) +#define NBL_SEC036_ADDR (0xb04200) +#define NBL_SEC037_SIZE (1) +#define NBL_SEC037_ADDR (0xb04230) +#define NBL_SEC038_SIZE (1) +#define NBL_SEC038_ADDR (0xb04234) +#define NBL_SEC039_SIZE (64) +#define NBL_SEC039_ADDR (0xb05800) +#define NBL_SEC039_REGI(i) (0xb05800 + NBL_BYTES_IN_REG * (i)) +#define 
NBL_SEC040_SIZE (32) +#define NBL_SEC040_ADDR (0xb05400) +#define NBL_SEC040_REGI(i) (0xb05400 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC041_SIZE (16) +#define NBL_SEC041_ADDR (0xb05500) +#define NBL_SEC041_REGI(i) (0xb05500 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC042_SIZE (1) +#define NBL_SEC042_ADDR (0xb14148) +#define NBL_SEC043_SIZE (1) +#define NBL_SEC043_ADDR (0xb14104) +#define NBL_SEC044_SIZE (1) +#define NBL_SEC044_ADDR (0xb1414c) +#define NBL_SEC045_SIZE (1) +#define NBL_SEC045_ADDR (0xb14150) +#define NBL_SEC046_SIZE (256) +#define NBL_SEC046_ADDR (0xb15000) +#define NBL_SEC046_REGI(i) (0xb15000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC047_SIZE (32) +#define NBL_SEC047_ADDR (0xb15800) +#define NBL_SEC047_REGI(i) (0xb15800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC048_SIZE (1) +#define NBL_SEC048_ADDR (0xb24148) +#define NBL_SEC049_SIZE (1) +#define NBL_SEC049_ADDR (0xb24104) +#define NBL_SEC050_SIZE (1) +#define NBL_SEC050_ADDR (0xb2414c) +#define NBL_SEC051_SIZE (1) +#define NBL_SEC051_ADDR (0xb24150) +#define NBL_SEC052_SIZE (256) +#define NBL_SEC052_ADDR (0xb25000) +#define NBL_SEC052_REGI(i) (0xb25000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC053_SIZE (32) +#define NBL_SEC053_ADDR (0xb25800) +#define NBL_SEC053_REGI(i) (0xb25800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC054_SIZE (1) +#define NBL_SEC054_ADDR (0xb34148) +#define NBL_SEC055_SIZE (1) +#define NBL_SEC055_ADDR (0xb34104) +#define NBL_SEC056_SIZE (1) +#define NBL_SEC056_ADDR (0xb3414c) +#define NBL_SEC057_SIZE (1) +#define NBL_SEC057_ADDR (0xb34150) +#define NBL_SEC058_SIZE (256) +#define NBL_SEC058_ADDR (0xb35000) +#define NBL_SEC058_REGI(i) (0xb35000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC059_SIZE (32) +#define NBL_SEC059_ADDR (0xb35800) +#define NBL_SEC059_REGI(i) (0xb35800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC060_SIZE (1) +#define NBL_SEC060_ADDR (0xe74630) +#define NBL_SEC061_SIZE (1) +#define NBL_SEC061_ADDR (0xe74634) +#define NBL_SEC062_SIZE (64) +#define NBL_SEC062_ADDR 
(0xe75000) +#define NBL_SEC062_REGI(i) (0xe75000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC063_SIZE (32) +#define NBL_SEC063_ADDR (0xe75480) +#define NBL_SEC063_REGI(i) (0xe75480 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC064_SIZE (16) +#define NBL_SEC064_ADDR (0xe75980) +#define NBL_SEC064_REGI(i) (0xe75980 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC065_SIZE (32) +#define NBL_SEC065_ADDR (0x15f000) +#define NBL_SEC065_REGI(i) (0x15f000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC066_SIZE (32) +#define NBL_SEC066_ADDR (0x75f000) +#define NBL_SEC066_REGI(i) (0x75f000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC067_SIZE (1) +#define NBL_SEC067_ADDR (0xb64108) +#define NBL_SEC068_SIZE (1) +#define NBL_SEC068_ADDR (0xb6410c) +#define NBL_SEC069_SIZE (1) +#define NBL_SEC069_ADDR (0xb64140) +#define NBL_SEC070_SIZE (1) +#define NBL_SEC070_ADDR (0xb64144) +#define NBL_SEC071_SIZE (512) +#define NBL_SEC071_ADDR (0xb65000) +#define NBL_SEC071_REGI(i) (0xb65000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC072_SIZE (32) +#define NBL_SEC072_ADDR (0xb65800) +#define NBL_SEC072_REGI(i) (0xb65800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC073_SIZE (1) +#define NBL_SEC073_ADDR (0x8c210) +#define NBL_SEC074_SIZE (1) +#define NBL_SEC074_ADDR (0x85c210) +#define NBL_SEC075_SIZE (4) +#define NBL_SEC075_ADDR (0x8c1b0) +#define NBL_SEC075_REGI(i) (0x8c1b0 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC076_SIZE (4) +#define NBL_SEC076_ADDR (0x8c1c0) +#define NBL_SEC076_REGI(i) (0x8c1c0 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC077_SIZE (4) +#define NBL_SEC077_ADDR (0x85c1b0) +#define NBL_SEC077_REGI(i) (0x85c1b0 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC078_SIZE (1) +#define NBL_SEC078_ADDR (0x85c1ec) +#define NBL_SEC079_SIZE (1) +#define NBL_SEC079_ADDR (0x8c1ec) +#define NBL_SEC080_SIZE (1) +#define NBL_SEC080_ADDR (0xb04440) +#define NBL_SEC081_SIZE (1) +#define NBL_SEC081_ADDR (0xb04448) +#define NBL_SEC082_SIZE (1) +#define NBL_SEC082_ADDR (0xb14450) +#define NBL_SEC083_SIZE (1) +#define 
NBL_SEC083_ADDR (0xb24450) +#define NBL_SEC084_SIZE (1) +#define NBL_SEC084_ADDR (0xb34450) +#define NBL_SEC085_SIZE (1) +#define NBL_SEC085_ADDR (0xa04188) +#define NBL_SEC086_SIZE (1) +#define NBL_SEC086_ADDR (0xe74218) +#define NBL_SEC087_SIZE (1) +#define NBL_SEC087_ADDR (0xe7421c) +#define NBL_SEC088_SIZE (1) +#define NBL_SEC088_ADDR (0xe74220) +#define NBL_SEC089_SIZE (1) +#define NBL_SEC089_ADDR (0xe74224) +#define NBL_SEC090_SIZE (1) +#define NBL_SEC090_ADDR (0x75c22c) +#define NBL_SEC091_SIZE (1) +#define NBL_SEC091_ADDR (0x75c230) +#define NBL_SEC092_SIZE (1) +#define NBL_SEC092_ADDR (0x75c238) +#define NBL_SEC093_SIZE (1) +#define NBL_SEC093_ADDR (0x75c244) +#define NBL_SEC094_SIZE (1) +#define NBL_SEC094_ADDR (0x75c248) +#define NBL_SEC095_SIZE (1) +#define NBL_SEC095_ADDR (0x75c250) +#define NBL_SEC096_SIZE (1) +#define NBL_SEC096_ADDR (0x15c230) +#define NBL_SEC097_SIZE (1) +#define NBL_SEC097_ADDR (0x15c234) +#define NBL_SEC098_SIZE (1) +#define NBL_SEC098_ADDR (0x15c238) +#define NBL_SEC099_SIZE (1) +#define NBL_SEC099_ADDR (0x15c23c) +#define NBL_SEC100_SIZE (1) +#define NBL_SEC100_ADDR (0x15c244) +#define NBL_SEC101_SIZE (1) +#define NBL_SEC101_ADDR (0x15c248) +#define NBL_SEC102_SIZE (1) +#define NBL_SEC102_ADDR (0xb6432c) +#define NBL_SEC103_SIZE (1) +#define NBL_SEC103_ADDR (0xb64220) +#define NBL_SEC104_SIZE (1) +#define NBL_SEC104_ADDR (0xb44804) +#define NBL_SEC105_SIZE (1) +#define NBL_SEC105_ADDR (0xb44a00) +#define NBL_SEC106_SIZE (1) +#define NBL_SEC106_ADDR (0xe84210) +#define NBL_SEC107_SIZE (1) +#define NBL_SEC107_ADDR (0xe84214) +#define NBL_SEC108_SIZE (1) +#define NBL_SEC108_ADDR (0xe64228) +#define NBL_SEC109_SIZE (1) +#define NBL_SEC109_ADDR (0x65413c) +#define NBL_SEC110_SIZE (1) +#define NBL_SEC110_ADDR (0x984144) +#define NBL_SEC111_SIZE (1) +#define NBL_SEC111_ADDR (0x114130) +#define NBL_SEC112_SIZE (1) +#define NBL_SEC112_ADDR (0x714138) +#define NBL_SEC113_SIZE (1) +#define NBL_SEC113_ADDR (0x114134) +#define 
NBL_SEC114_SIZE (1) +#define NBL_SEC114_ADDR (0x71413c) +#define NBL_SEC115_SIZE (1) +#define NBL_SEC115_ADDR (0x90437c) +#define NBL_SEC116_SIZE (32) +#define NBL_SEC116_ADDR (0xb05000) +#define NBL_SEC116_REGI(i) (0xb05000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC117_SIZE (1) +#define NBL_SEC117_ADDR (0xb043e0) +#define NBL_SEC118_SIZE (1) +#define NBL_SEC118_ADDR (0xb043f0) +#define NBL_SEC119_SIZE (5) +#define NBL_SEC119_ADDR (0x8c230) +#define NBL_SEC119_REGI(i) (0x8c230 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC120_SIZE (1) +#define NBL_SEC120_ADDR (0x8c1f4) +#define NBL_SEC121_SIZE (1) +#define NBL_SEC121_ADDR (0x2046c4) +#define NBL_SEC122_SIZE (1) +#define NBL_SEC122_ADDR (0x85c1f4) +#define NBL_SEC123_SIZE (1) +#define NBL_SEC123_ADDR (0x75c194) +#define NBL_SEC124_SIZE (256) +#define NBL_SEC124_ADDR (0xa05000) +#define NBL_SEC124_REGI(i) (0xa05000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC125_SIZE (256) +#define NBL_SEC125_ADDR (0xa06000) +#define NBL_SEC125_REGI(i) (0xa06000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC126_SIZE (256) +#define NBL_SEC126_ADDR (0xa07000) +#define NBL_SEC126_REGI(i) (0xa07000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC127_SIZE (1) +#define NBL_SEC127_ADDR (0x75c204) +#define NBL_SEC128_SIZE (1) +#define NBL_SEC128_ADDR (0x15c204) +#define NBL_SEC129_SIZE (1) +#define NBL_SEC129_ADDR (0x75c208) +#define NBL_SEC130_SIZE (1) +#define NBL_SEC130_ADDR (0x15c208) +#define NBL_SEC131_SIZE (1) +#define NBL_SEC131_ADDR (0x75c20c) +#define NBL_SEC132_SIZE (1) +#define NBL_SEC132_ADDR (0x15c20c) +#define NBL_SEC133_SIZE (1) +#define NBL_SEC133_ADDR (0x75c210) +#define NBL_SEC134_SIZE (1) +#define NBL_SEC134_ADDR (0x15c210) +#define NBL_SEC135_SIZE (1) +#define NBL_SEC135_ADDR (0x75c214) +#define NBL_SEC136_SIZE (1) +#define NBL_SEC136_ADDR (0x15c214) +#define NBL_SEC137_SIZE (32) +#define NBL_SEC137_ADDR (0x15d000) +#define NBL_SEC137_REGI(i) (0x15d000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC138_SIZE (32) +#define NBL_SEC138_ADDR 
(0x75d000) +#define NBL_SEC138_REGI(i) (0x75d000 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC139_SIZE (1) +#define NBL_SEC139_ADDR (0x75c310) +#define NBL_SEC140_SIZE (1) +#define NBL_SEC140_ADDR (0x75c314) +#define NBL_SEC141_SIZE (1) +#define NBL_SEC141_ADDR (0x75c340) +#define NBL_SEC142_SIZE (1) +#define NBL_SEC142_ADDR (0x75c344) +#define NBL_SEC143_SIZE (1) +#define NBL_SEC143_ADDR (0x75c348) +#define NBL_SEC144_SIZE (1) +#define NBL_SEC144_ADDR (0x75c34c) +#define NBL_SEC145_SIZE (32) +#define NBL_SEC145_ADDR (0xb15800) +#define NBL_SEC145_REGI(i) (0xb15800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC146_SIZE (32) +#define NBL_SEC146_ADDR (0xb25800) +#define NBL_SEC146_REGI(i) (0xb25800 + NBL_BYTES_IN_REG * (i)) +#define NBL_SEC147_SIZE (32) +#define NBL_SEC147_ADDR (0xb35800) +#define NBL_SEC147_REGI(i) (0xb35800 + NBL_BYTES_IN_REG * (i)) + +static u32 nbl_sec046_1p_data[] = { + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xa0000000, 0x00077c2b, 0x005c0000, + 0x00000000, 0x00008100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x20000000, 0x00073029, 0x00480000, + 0x00000000, 0x00008100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x20000000, 0x00073029, 0x00480000, + 0x70000000, 0x00000020, 0x24140000, 0x00000020, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xa0000000, 0x00000009, 0x00000000, + 0x00000000, 0x00002100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xb0000000, 0x00000009, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x70000000, 0x00000000, 0x20140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x70000000, 0x00000000, 0x20140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x38430000, + 0x70000006, 0x00000020, 0x24140000, 0x00000020, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x98cb1180, 0x6e36d469, + 0x9d8eb91c, 0x87e3ef47, 0xa2931288, 0x08405c5a, + 0x73865086, 0x00000080, 0x30140000, 0x00000080, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xb0000000, 0x000b3849, 0x38430000, + 0x00000006, 0x0000c100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xb0000000, 0x00133889, 0x08400000, + 0x03865086, 0x4c016100, 0x00000014, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec071_1p_data[] = { + 0x00000000, 0x00000000, 0x00113d00, 0x00000000, + 0x00000000, 0x00000000, 0xe7029b00, 0x00000000, + 0x00000000, 0x43000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x51e00000, 0x00000c9c, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00293d00, 0x00000000, + 0x00000000, 0x00000000, 0x67089b00, 0x00000002, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x80000000, 0x00000000, 0xb1e00000, 0x0000189c, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00213d00, 0x00000000, + 0x00000000, 0x00000000, 0xe7069b00, 0x00000001, + 0x00000000, 0x43000000, 0x014b0c70, 0x00000000, + 0x00000000, 0x00000000, 0x92600000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00213d00, 0x00000000, + 0x00000000, 0x00000000, 0xe7069b00, 0x00000001, + 0x00000000, 0x43000000, 0x015b0c70, 0x00000000, + 0x00000000, 0x00000000, 0x92600000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00553d00, 0x00000000, + 0x00000000, 0x00000000, 0xe6d29a00, 0x000149c4, + 0x00000000, 0x4b000000, 0x00000004, 0x00000000, + 0x80000000, 0x00022200, 0x62600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00553d00, 0x00000000, + 0x00000000, 0x00000000, 0xe6d2c000, 0x000149c4, + 0x00000000, 0x5b000000, 0x00000004, 0x00000000, + 0x80000000, 0x00022200, 0x62600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x006d3d00, 0x00000000, + 0x00000000, 0x00000000, 0x64d49200, 0x5e556945, + 0xc666d89a, 0x4b0001a9, 0x00004c84, 0x00000000, + 0x80000000, 0x00022200, 0xc2600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x006d3d00, 0x00000000, + 0x00000000, 0x00000000, 0x6ed4ba00, 0x5ef56bc5, + 0xc666d8c0, 0x5b0001a9, 0x00004dc4, 0x00000000, + 0x80000000, 0x00022200, 0xc2600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000002, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00700000, 0x00000000, 0x08028000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec046_2p_data[] = { + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xa0000000, 0x00077c2b, 0x005c0000, + 0x00000000, 0x00008100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x20000000, 0x00073029, 0x00480000, + 0x00000000, 0x00008100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x20000000, 0x00073029, 0x00480000, + 0x70000000, 0x00000020, 0x04140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xa0000000, 0x00000009, 0x00000000, + 0x00000000, 0x00002100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xb0000000, 0x00000009, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x70000000, 0x00000000, 0x00140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x70000000, 0x00000000, 0x00140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x38430000, + 0x70000006, 0x00000020, 0x04140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x98cb1180, 0x6e36d469, + 0x9d8eb91c, 0x87e3ef47, 0xa2931288, 0x08405c5a, + 0x73865086, 0x00000080, 0x10140000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xb0000000, 0x000b3849, 0x38430000, + 0x00000006, 0x0000c100, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0xb0000000, 0x00133889, 0x08400000, + 0x03865086, 0x4c016100, 0x00000014, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec071_2p_data[] = { + 0x00000000, 0x00000000, 0x00113d00, 0x00000000, + 0x00000000, 0x00000000, 0xe7029b00, 0x00000000, + 0x00000000, 0x43000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x51e00000, 0x00000c9c, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00293d00, 0x00000000, + 0x00000000, 0x00000000, 0x67089b00, 0x00000002, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x80000000, 0x00000000, 0xb1e00000, 0x0000189c, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00213d00, 0x00000000, + 0x00000000, 0x00000000, 0xe7069b00, 0x00000001, + 0x00000000, 0x43000000, 0x014b0c70, 0x00000000, + 0x00000000, 0x00000000, 0x92600000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00213d00, 0x00000000, + 0x00000000, 0x00000000, 0xe7069b00, 0x00000001, + 0x00000000, 0x43000000, 0x015b0c70, 0x00000000, + 0x00000000, 0x00000000, 0x92600000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00553d00, 0x00000000, + 0x00000000, 0x00000000, 0xe6d29a00, 0x000149c4, + 0x00000000, 0x4b000000, 0x00000004, 0x00000000, + 0x80000000, 0x00022200, 0x62600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00553d00, 0x00000000, + 0x00000000, 0x00000000, 0xe6d2c000, 0x000149c4, + 0x00000000, 0x5b000000, 0x00000004, 0x00000000, + 
0x80000000, 0x00022200, 0x62600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x006d3d00, 0x00000000, + 0x00000000, 0x00000000, 0x64d49200, 0x5e556945, + 0xc666d89a, 0x4b0001a9, 0x00004c84, 0x00000000, + 0x80000000, 0x00022200, 0xc2600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x006d3d00, 0x00000000, + 0x00000000, 0x00000000, 0x6ed4ba00, 0x5ef56bc5, + 0xc666d8c0, 0x5b0001a9, 0x00004dc4, 0x00000000, + 0x80000000, 0x00022200, 0xc2600000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000002, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00700000, 0x00000000, 0x00028000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec006_data[] = { + 0x81008100, 0x00000001, 0x88a88100, 0x00000001, + 0x810088a8, 0x00000001, 0x88a888a8, 0x00000001, + 0x81000000, 0x00000001, 0x88a80000, 0x00000001, + 0x00000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x08004000, 0x00000001, 0x86dd6000, 0x00000001, + 0x81000000, 0x00000001, 0x88a80000, 0x00000001, + 0x08060000, 0x00000001, 0x80350000, 0x00000001, + 0x88080000, 0x00000001, 0x88f70000, 0x00000001, + 0x88cc0000, 0x00000001, 0x88090000, 0x00000001, + 0x89150000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000001, + 0x11006000, 0x00000001, 0x06006000, 0x00000001, + 0x02006000, 0x00000001, 0x3a006000, 0x00000001, + 0x2f006000, 0x00000001, 0x84006000, 0x00000001, + 0x32006000, 0x00000001, 0x2c006000, 0x00000001, + 0x3c006000, 0x00000001, 0x2b006000, 0x00000001, + 0x00006000, 0x00000001, 0x00004000, 0x00000001, + 0x00004000, 0x00000001, 0x20004000, 0x00000001, + 0x40004000, 0x00000001, 0x00000000, 0x00000001, + 0x11000000, 0x00000001, 0x06000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x2f000000, 0x00000001, 0x84000000, 0x00000001, + 0x32000000, 0x00000001, 0x2c000000, 0x00000001, + 0x2b000000, 0x00000001, 0x3c000000, 0x00000001, + 0x3b000000, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x11000000, 0x00000001, 0x06000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 
0x2f000000, 0x00000001, 0x84000000, 0x00000001, + 0x32000000, 0x00000001, 0x00000000, 0x00000000, + 0x2c000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x2b000000, 0x00000001, 0x3c000000, 0x00000001, + 0x3b000000, 0x00000001, 0x00000000, 0x00000001, + 0x06001072, 0x00000001, 0x06000000, 0x00000001, + 0x110017c1, 0x00000001, 0x110012b7, 0x00000001, + 0x110012b5, 0x00000001, 0x01000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x11000043, 0x00000001, 0x11000044, 0x00000001, + 0x11000222, 0x00000001, 0x11000000, 0x00000001, + 0x2f006558, 0x00000001, 0x32000000, 0x00000001, + 0x84000000, 0x00000001, 0x00000000, 0x00000001, + 0x65582000, 0x00000001, 0x65583000, 0x00000001, + 0x6558a000, 0x00000001, 0x6558b000, 0x00000001, + 0x65580000, 0x00000001, 0x12b50000, 0x00000001, + 0x02000102, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x65580000, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x81008100, 0x00000001, 0x88a88100, 0x00000001, + 0x810088a8, 0x00000001, 0x88a888a8, 0x00000001, + 0x81000000, 0x00000001, 0x88a80000, 0x00000001, + 0x00000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x08004000, 0x00000001, 0x86dd6000, 0x00000001, + 0x81000000, 0x00000001, 0x88a80000, 0x00000001, + 
0x08060000, 0x00000001, 0x80350000, 0x00000001, + 0x88080000, 0x00000001, 0x88f70000, 0x00000001, + 0x88cc0000, 0x00000001, 0x88090000, 0x00000001, + 0x89150000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000001, + 0x11006000, 0x00000001, 0x06006000, 0x00000001, + 0x02006000, 0x00000001, 0x3a006000, 0x00000001, + 0x2f006000, 0x00000001, 0x84006000, 0x00000001, + 0x32006000, 0x00000001, 0x2c006000, 0x00000001, + 0x3c006000, 0x00000001, 0x2b006000, 0x00000001, + 0x00006000, 0x00000001, 0x00004000, 0x00000001, + 0x00004000, 0x00000001, 0x20004000, 0x00000001, + 0x40004000, 0x00000001, 0x00000000, 0x00000001, + 0x11000000, 0x00000001, 0x06000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x2f000000, 0x00000001, 0x84000000, 0x00000001, + 0x32000000, 0x00000001, 0x2c000000, 0x00000001, + 0x2b000000, 0x00000001, 0x3c000000, 0x00000001, + 0x3b000000, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x11000000, 0x00000001, 0x06000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x2f000000, 0x00000001, 0x84000000, 0x00000001, + 0x32000000, 0x00000001, 0x00000000, 0x00000000, + 0x2c000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x2b000000, 0x00000001, 0x3c000000, 0x00000001, + 0x3b000000, 0x00000001, 0x00000000, 0x00000001, + 0x06001072, 0x00000001, 0x06000000, 0x00000001, + 0x110012b7, 0x00000001, 0x01000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x32000000, 0x00000001, 0x84000000, 0x00000001, + 0x11000043, 0x00000001, 0x11000044, 0x00000001, + 0x11000222, 0x00000001, 0x11000000, 0x00000001, + 0x2f006558, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec007_data[] = { + 0x10001000, 0x00001000, 0x10000000, 0x00000000, + 0x1000ffff, 0x0000ffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000fff, 0x00000fff, 0x1000ffff, 0x0000ffff, + 0x0000ffff, 0x0000ffff, 0x0000ffff, 0x0000ffff, + 0x0000ffff, 0x0000ffff, 0x0000ffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, + 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, + 0x00ff0fff, 0x10ff0fff, 0xffff0fff, 0x00000fff, + 0x1fff0fff, 0x1fff0fff, 0x1fff0fff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00ffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00ff0000, 0x00ffffff, 0x00ff0000, 0x00ff0000, + 0x00ff0000, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ff0000, 0x00ff0000, 0x00ff0001, 0x00ffffff, + 0x00ff0000, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00000fff, 0x00000fff, 0x00000fff, 0x00000fff, + 
0x00000fff, 0x0000ffff, 0xc0ff0000, 0xc0ffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x0000ffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x10001000, 0x00001000, 0x10000000, 0x00000000, + 0x1000ffff, 0x0000ffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000fff, 0x00000fff, 0x1000ffff, 0x0000ffff, + 0x0000ffff, 0x0000ffff, 0x0000ffff, 0x0000ffff, + 0x0000ffff, 0x0000ffff, 0x0000ffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, + 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, + 0x00ff0fff, 0x10ff0fff, 0xffff0fff, 0x00000fff, + 0x1fff0fff, 0x1fff0fff, 0x1fff0fff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00ffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00ff0000, 0x00ffffff, 0x00ff0000, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ff0000, 0x00ff0000, 0x00ff0001, 0x00ffffff, + 0x00ff0000, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, +}; + +static u32 nbl_sec008_data[] = { + 
0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00800090, 0x12009092, 0x00000100, 0x00000000, + 0x00800090, 0x12009092, 0x00000100, 0x00000000, + 0x00800000, 0x0e008c8e, 0x00000100, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x08909581, 0x00008680, 0x00000200, 0x00000000, + 0x10900082, 0x28008680, 0x00000200, 0x00000000, + 0x809b0093, 0x00000000, 0x00000100, 0x00000000, + 0x809b0093, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b0000, 0x00000000, 0x00000100, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x009b0000, 0x00000000, 0x00000100, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000200, 0x00000000, + 
0x00ab0000, 0x00000000, 0x00000200, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000200, 0x00000000, + 0x40000000, 0x01c180c2, 0x00000300, 0x00000000, + 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000, + 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000, + 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000, + 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000, + 0x000b0000, 0x00000000, 0x00000200, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000300, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000300, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000300, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000300, 0x00000000, + 0x40000000, 0x01c180c2, 0x00000400, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000400, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 
0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000, + 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000, + 0x02a00084, 0x08008890, 0x00000600, 0x00000000, + 0x02ab848a, 0x08000000, 0x00000500, 0x00000000, + 0x02a00084, 0x10008200, 0x00000600, 0x00000000, + 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000500, 0x00000000, + 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000, + 0x02ab848f, 0x08000000, 0x00000500, 0x00000000, + 0x02ab848f, 0x08000000, 0x00000500, 0x00000000, + 0x02ab848f, 0x08000000, 0x00000500, 0x00000000, + 0x02ab0084, 0x08000000, 0x00000500, 0x00000000, + 0x00a00000, 0x04008280, 0x00000600, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000500, 0x00000000, + 0x04ab8e84, 0x0c000000, 0x00000500, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000500, 0x00000000, + 0x00000000, 0x0400ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x0800ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x0800ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x0c00ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x10008200, 0x00000700, 0x00000000, + 0x00000000, 0x08008200, 0x00000700, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x0000ccd0, 0x00000800, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00808786, 0x16009496, 0x00000900, 0x00000000, + 0x00808786, 0x16009496, 0x00000900, 0x00000000, + 0x00808786, 0x16009496, 0x00000900, 0x00000000, + 0x00808786, 0x16009496, 0x00000900, 0x00000000, + 0x00800086, 0x12009092, 0x00000900, 0x00000000, + 0x00800086, 0x12009092, 0x00000900, 0x00000000, + 0x00800000, 0x0e008c8e, 0x00000900, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x08908192, 0x00008680, 0x00000a00, 0x00000000, + 0x10908292, 0x28008680, 0x00000a00, 0x00000000, + 0x809b9392, 0x00000000, 0x00000900, 0x00000000, + 0x809b9392, 0x00000000, 0x00000900, 0x00000000, + 0x009b8f92, 0x00000000, 0x00000900, 0x00000000, + 0x009b8f92, 0x00000000, 0x00000900, 0x00000000, + 0x009b8f92, 0x00000000, 0x00000900, 0x00000000, + 0x009b8f92, 0x00000000, 0x00000900, 0x00000000, + 0x009b8f92, 0x00000000, 0x00000900, 0x00000000, + 0x009b8f92, 0x00000000, 0x00000900, 0x00000000, + 0x009b0092, 0x00000000, 0x00000900, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x009b0092, 0x00000000, 0x00000900, 0x00000000, + 
0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000a00, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000a00, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000a00, 0x00000000, + 0x40000000, 0x01c180c2, 0x00000b00, 0x00000000, + 0x00000000, 0x00a089c2, 0x00000df0, 0x00000000, + 0x000b0085, 0x00a00000, 0x00000af0, 0x00000000, + 0x000b0085, 0x00a00000, 0x00000af0, 0x00000000, + 0x00000000, 0x00a089c2, 0x00000df0, 0x00000000, + 0x000b0000, 0x00000000, 0x00000a00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000b00, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000b00, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000b00, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000b00, 0x00000000, + 0x40000000, 0x01c180c2, 0x00000c00, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000082, 0x00000d00, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
+	0x00ab0085, 0x08000000, 0x00000c00, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000c00, 0x00000000,
+	0x01ab0083, 0x0ca00000, 0x00000d0f, 0x00000000,
+	0x01ab0083, 0x0ca00000, 0x00000d0f, 0x00000000,
+	0x02ab8a84, 0x08000000, 0x00000d00, 0x00000000,
+	0x00ab8f8e, 0x04000000, 0x00000d00, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000d00, 0x00000000,
+	0x00ab8f8e, 0x04000000, 0x00000d00, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000d00, 0x00000000,
+	0x04ab8e84, 0x0c000000, 0x00000d00, 0x00000000,
+	0x02ab848f, 0x08000000, 0x00000d00, 0x00000000,
+	0x02ab848f, 0x08000000, 0x00000d00, 0x00000000,
+	0x02ab848f, 0x08000000, 0x00000d00, 0x00000000,
+	0x02ab0084, 0x08000000, 0x00000d00, 0x00000000,
+	0x00ab0000, 0x04000000, 0x00000d00, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00ab0000, 0x00000000, 0x00000d00, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec009_data[] = {
+	0x00000000, 0x00000060, 0x00000000, 0x00000090,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000050, 0x00000000, 0x000000a0,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x000000a0, 0x00000000, 0x00000050,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000800, 0x00000000, 0x00000700,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000900, 0x00000000, 0x00000600,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00008000, 0x00000000, 0x00007000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00009000, 0x00000000, 0x00006000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x0000a000, 0x00000000, 0x00005000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x000c0000, 0x00000000, 0x00030000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x000d0000, 0x00000000, 0x00020000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x000e0000, 0x00000000, 0x00010000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000040, 0x00000000, 0x000000b0,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000070, 0x00000000, 0x00000080,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000090, 0x00000000, 0x00000060,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000080, 0x00000000, 0x00000070,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000700, 0x00000000, 0x00000800,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00007000, 0x00000000, 0x00008000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00080000, 0x00000000, 0x00070000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000c00, 0x00000000, 0x00000300,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000d00, 0x00000000, 0x00000200,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00600000, 0x00000000, 0x00900000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00d00000, 0x00000000, 0x00200000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00500000, 0x00000000, 0x00a00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00700000, 0x00000000, 0x00800000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00e00000, 0x00000000, 0x00100000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00f00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00f00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00100000, 0x00000000, 0x00e00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00300000, 0x00000000, 0x00c00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00800000, 0x00000000, 0x00700000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00900000, 0x00000000, 0x00600000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00a00000, 0x00000000, 0x00500000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00b00000, 0x00000000, 0x00400000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000060, 0x00400000, 0x00000090, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000050, 0x00400000, 0x000000a0, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000000a0, 0x00400000, 0x00000050, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000800, 0x00400000, 0x00000700, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000900, 0x00400000, 0x00000600, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00008000, 0x00400000, 0x00007000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00009000, 0x00400000, 0x00006000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x0000a000, 0x00400000, 0x00005000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000c0000, 0x00400000, 0x00030000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000d0000, 0x00400000, 0x00020000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000e0000, 0x00400000, 0x00010000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000070, 0x00400000, 0x00000080, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000700, 0x00400000, 0x00000800, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00007000, 0x00400000, 0x00008000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00080000, 0x00400000, 0x00070000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000c00, 0x00400000, 0x00000300, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000d00, 0x00400000, 0x00000200, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000040, 0x00400000, 0x000000b0, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000090, 0x00400000, 0x00000060, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000080, 0x00400000, 0x00000070, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000060, 0x06000000, 0x00000090, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000060, 0x07000000, 0x00000090, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000050, 0x06000000, 0x000000a0, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000050, 0x07000000, 0x000000a0, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000000a0, 0x06000000, 0x00000050, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000000a0, 0x07000000, 0x00000050, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000800, 0x06000000, 0x00000700, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000900, 0x06000000, 0x00000600, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00008000, 0x06000000, 0x00007000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00009000, 0x06000000, 0x00006000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x0000a000, 0x06000000, 0x00005000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000c0000, 0x06000000, 0x00030000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000d0000, 0x06000000, 0x00020000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000e0000, 0x06000000, 0x00010000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000800, 0x07000000, 0x00000700, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000900, 0x07000000, 0x00000600, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00008000, 0x07000000, 0x00007000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00009000, 0x07000000, 0x00006000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x0000a000, 0x07000000, 0x00005000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000c0000, 0x07000000, 0x00030000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000d0000, 0x07000000, 0x00020000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000e0000, 0x07000000, 0x00010000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000070, 0x06000000, 0x00000080, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000070, 0x07000000, 0x00000080, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000700, 0x06000000, 0x00000800, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00007000, 0x06000000, 0x00008000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00080000, 0x06000000, 0x00070000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000c00, 0x06000000, 0x00000300, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000d00, 0x06000000, 0x00000200, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000700, 0x07000000, 0x00000800, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00007000, 0x07000000, 0x00008000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00080000, 0x07000000, 0x00070000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000c00, 0x07000000, 0x00000300, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000d00, 0x07000000, 0x00000200, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000040, 0x06000000, 0x000000b0, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000040, 0x07000000, 0x000000b0, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000090, 0x06000000, 0x00000060, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000090, 0x07000000, 0x00000060, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000080, 0x06000000, 0x00000070, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000080, 0x07000000, 0x00000070, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000060, 0x00c00000, 0x00000090, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000050, 0x00c00000, 0x000000a0, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000000a0, 0x00c00000, 0x00000050, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000800, 0x00c00000, 0x00000700, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000900, 0x00c00000, 0x00000600, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00008000, 0x00c00000, 0x00007000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00009000, 0x00c00000, 0x00006000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x0000a000, 0x00c00000, 0x00005000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000c0000, 0x00c00000, 0x00030000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000d0000, 0x00c00000, 0x00020000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000e0000, 0x00c00000, 0x00010000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000070, 0x00c00000, 0x00000080, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000700, 0x00c00000, 0x00000800, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00007000, 0x00c00000, 0x00008000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00080000, 0x00c00000, 0x00070000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000c00, 0x00c00000, 0x00000300, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000d00, 0x00c00000, 0x00000200, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000040, 0x00c00000, 0x000000b0, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000090, 0x00c00000, 0x00000060, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000080, 0x00c00000, 0x00000070, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00400000, 0x00400000, 0x00b00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00600000, 0x00400000, 0x00900000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00300000, 0x00400000, 0x00c00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00500000, 0x00400000, 0x00a00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00700000, 0x00400000, 0x00800000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00200000, 0x00400000, 0x00d00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00800000, 0x00400000, 0x00700000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00900000, 0x00400000, 0x00600000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00a00000, 0x00400000, 0x00500000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00b00000, 0x00400000, 0x00400000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00400000, 0x00f00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00400000, 0x00f00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00100000, 0x00400000, 0x00e00000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00400000, 0x06000000, 0x00b00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00400000, 0x07000000, 0x00b00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00600000, 0x06000000, 0x00900000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00600000, 0x07000000, 0x00900000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00300000, 0x06000000, 0x00c00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00300000, 0x07000000, 0x00c00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00500000, 0x06000000, 0x00a00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00500000, 0x07000000, 0x00a00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00700000, 0x06000000, 0x00800000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00700000, 0x07000000, 0x00800000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00200000, 0x06000000, 0x00d00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00200000, 0x07000000, 0x00d00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00800000, 0x06000000, 0x00700000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00900000, 0x06000000, 0x00600000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00a00000, 0x06000000, 0x00500000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00b00000, 0x06000000, 0x00400000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00800000, 0x07000000, 0x00700000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00900000, 0x07000000, 0x00600000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00a00000, 0x07000000, 0x00500000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00b00000, 0x07000000, 0x00400000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x06000000, 0x00f00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x07000000, 0x00f00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x06000000, 0x00f00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00100000, 0x06000000, 0x00e00000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x07000000, 0x00f00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00100000, 0x07000000, 0x00e00000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00400000, 0x00c00000, 0x00b00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00600000, 0x00c00000, 0x00900000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00300000, 0x00c00000, 0x00c00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00500000, 0x00c00000, 0x00a00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00700000, 0x00c00000, 0x00800000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00200000, 0x00c00000, 0x00d00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00800000, 0x00c00000, 0x00700000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00900000, 0x00c00000, 0x00600000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00a00000, 0x00c00000, 0x00500000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00b00000, 0x00c00000, 0x00400000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00c00000, 0x00f00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00c00000, 0x00f00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00100000, 0x00c00000, 0x00e00000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000f0000, 0x00400000, 0x00000000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00f00000, 0x00400000, 0x00000000, 0x00b00000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000f0000, 0x06000000, 0x00000000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00f00000, 0x06000000, 0x00000000, 0x09000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000f0000, 0x07000000, 0x00000000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00f00000, 0x07000000, 0x00000000, 0x08000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x000f0000, 0x00c00000, 0x00000000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00f00000, 0x00c00000, 0x00000000, 0x00300000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x000f0000, 0x00000000, 0x00000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00f00000, 0x00000000, 0x00000000,
+	0x00000001, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec010_data[] = {
+	0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+	0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+	0x0000000a, 0x0000000a, 0x0000000a, 0x00000000,
+	0x0000000b, 0x00000008, 0x00000009, 0x0000000f,
+	0x0000000f, 0x0000000f, 0x0000000f, 0x0000000f,
+	0x0000000c, 0x0000000d, 0x00000001, 0x00000001,
+	0x0000000e, 0x00000005, 0x00000002, 0x00000002,
+	0x00000004, 0x00000003, 0x00000003, 0x00000003,
+	0x00000003, 0x00000040, 0x00000040, 0x00000040,
+	0x00000040, 0x00000040, 0x00000040, 0x00000040,
+	0x00000040, 0x00000040, 0x00000040, 0x00000040,
+	0x00000045, 0x00000044, 0x00000044, 0x00000044,
+	0x00000044, 0x00000044, 0x00000041, 0x00000042,
+	0x00000043, 0x00000046, 0x00000046, 0x00000046,
+	0x00000046, 0x00000046, 0x00000046, 0x00000046,
+	0x00000046, 0x00000046, 0x00000046, 0x00000046,
+	0x00000046, 0x00000046, 0x00000046, 0x00000046,
+	0x00000046, 0x00000046, 0x00000046, 0x00000046,
+	0x00000046, 0x00000046, 0x00000046, 0x0000004b,
+	0x0000004b, 0x0000004a, 0x0000004a, 0x0000004a,
+	0x0000004a, 0x0000004a, 0x0000004a, 0x0000004a,
+	0x0000004a, 0x0000004a, 0x0000004a, 0x00000047,
+	0x00000047, 0x00000048, 0x00000048, 0x00000049,
+	0x00000049, 0x0000004c, 0x0000004c, 0x0000004c,
+	0x0000004c, 0x0000004c, 0x0000004c, 0x0000004c,
+	0x0000004c, 0x0000004c, 0x0000004c, 0x0000004c,
+	0x00000051, 0x00000050, 0x00000050, 0x00000050,
+	0x00000050, 0x00000050, 0x0000004d, 0x0000004e,
+	0x0000004f, 0x00000052, 0x00000053, 0x00000054,
+	0x00000054, 0x00000055, 0x00000056, 0x00000057,
+	0x00000057, 0x00000057, 0x00000057, 0x00000058,
+	0x00000059, 0x00000059, 0x0000005a, 0x0000005a,
+	0x0000005b, 0x0000005b, 0x0000005c, 0x0000005c,
+	0x0000005c, 0x0000005c, 0x0000005d, 0x0000005d,
+	0x0000005e, 0x0000005e, 0x0000005f, 0x0000005f,
+	0x0000005f, 0x0000005f, 0x0000005f, 0x0000005f,
+	0x0000005f, 0x0000005f, 0x00000060, 0x00000060,
+	0x00000061, 0x00000061, 0x00000061, 0x00000061,
+	0x00000062, 0x00000063, 0x00000064, 0x00000064,
+	0x00000065, 0x00000066, 0x00000067, 0x00000067,
+	0x00000067, 0x00000067, 0x00000068, 0x00000069,
+	0x00000069, 0x00000040, 0x00000040, 0x00000046,
+	0x00000046, 0x00000046, 0x00000046, 0x0000004c,
+	0x0000004c, 0x0000000a, 0x0000000a, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec011_data[] = {
+	0x0008002c, 0x00080234, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080230,
+	0x00080332, 0x0008063c, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0008002c, 0x00080234, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080230,
+	0x00080332, 0x00080738, 0x0008083c, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0008002c, 0x00080234, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080230,
+	0x00080332, 0x00080738, 0x0008093a, 0x00080a3c,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080634, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080730, 0x00080834, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080730, 0x00080932, 0x00080a34,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+	0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+	0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+	0x00090644, 0x00000000, 0x000d8045, 0x000d4145,
+	0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00090145, 0x00090944, 0x00000000, 0x00000000,
+	0x0009061c, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x0009033a,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+	0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+	0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+	0x0009063d, 0x00090740, 0x000d803f, 0x000d413f,
+	0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0009013f, 0x00090840, 0x000dc93d, 0x000d093d,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0324, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a003e,
+	0x000a0140, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0324, 0x000a0520, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a003e,
+	0x000a0140, 0x000a0842, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0124, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0224, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a003c, 0x000a0037, 0x000ec139, 0x000e0139,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+	0x000a0138, 0x000a0742, 0x00000000, 0x00000000,
+	0x000a0d41, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+	0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0d3e, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+	0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0037, 0x000a0139, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080634, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080730, 0x00080834, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080730, 0x00080932, 0x00080a34,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0009061c, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x0009033a,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00090200, 0x00090304, 0x00090408, 0x0009050c,
0x00090610, 0x00090714, 0x00090818, 0x0009121c, + 0x0009131e, 0x00000000, 0x00000000, 0x00000000, + 0x0009063d, 0x00090740, 0x000d803f, 0x000d413f, + 0x0009030c, 0x0009041c, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x0009013f, 0x00090840, 0x000dc93d, 0x000d093d, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x000a003c, 0x000a0037, 0x000ec139, 0x000e0139, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x000a0036, + 0x000a0138, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x000a0036, + 0x000a0138, 0x000a0742, 0x00000000, 0x00000000, + 0x000a0d41, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x000a0036, + 0x000a0138, 0x00000000, 0x00000000, 0x00000000, + 0x000a0d3e, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x000a0037, 0x000a0139, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec012_data[] = { + 0x00000006, 0x00000001, 0x00000004, 0x00000001, + 0x00000006, 0x00000001, 0x00000000, 0x00000001, + 0x00000004, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000010, 0x00000001, 0x00000000, 0x00000001, + 0x00000040, 0x00000001, 0x00000010, 0x00000001, + 0x00000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x06200000, 0x00000001, 0x00c00000, 0x00000001, + 0x02c00000, 0x00000001, 0x00200000, 0x00000001, + 0x00400000, 0x00000001, 0x00700000, 0x00000001, + 0x00300000, 0x00000001, 0x00000000, 0x00000001, + 0x00a00000, 0x00000001, 0x00b00000, 0x00000001, + 0x00e00000, 0x00000001, 0x00500000, 0x00000001, + 0x00800000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000004, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000040, 0x00000001, 0x00000010, 0x00000001, + 0x00000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00500000, 0x00000001, 0x00700000, 0x00000001, + 0x00a00000, 0x00000001, 0x00b00000, 0x00000001, + 0x00200000, 0x00000001, 0x00000000, 0x00000001, + 0x00300000, 0x00000001, 0x00800000, 0x00000001, + 
0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec013_data[] = { + 0xf7fffff0, 0xf7fffff1, 0xfffffff0, 0xf7fffff3, + 0xfffffff1, 0xfffffff3, 0xffffffff, 0xffffffff, + 0xf7ffff0f, 0xf7ffff0f, 0xffffff0f, 0xffffff0f, + 0xffffff0f, 0xffffffff, 0xffffffff, 0xffffffff, + 0x100fffff, 0xf10fffff, 0xf10fffff, 0xf70fffff, + 0xf70fffff, 0xff0fffff, 0xff0fffff, 0xff1fffff, + 0xff0fffff, 0xff0fffff, 0xff0fffff, 0xff0fffff, + 0xff1fffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xfffffff1, 0xfffffff3, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffff0f, 0xffffff0f, 0xffffff0f, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xff0fffff, 0xff0fffff, 0xff0fffff, 0xff0fffff, + 0xff0fffff, 0xff1fffff, 0xff0fffff, 0xff1fffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, +}; + +static u32 nbl_sec014_data[] = { + 0x00000000, 0x00000001, 0x00000003, 0x00000002, + 0x00000004, 0x00000005, 0x00000000, 0x00000000, + 0x00000000, 0x00000001, 0x00000002, 0x00000003, + 0x00000004, 0x00000000, 0x00000000, 0x00000000, + 0x00000001, 0x00000002, 0x00000003, 0x00000000, + 0x00000000, 0x00000004, 0x00000005, 0x00000006, + 0x00000000, 0x00000000, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000001, 0x00000002, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000001, 0x00000002, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000001, 0x00000001, 0x00000001, + 0x00000002, 0x00000003, 0x00000004, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec022_data[] = { + 0x81008100, 0x00000001, 0x88a88100, 0x00000001, + 0x810088a8, 0x00000001, 
0x88a888a8, 0x00000001, + 0x81000000, 0x00000001, 0x88a80000, 0x00000001, + 0x00000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x08004000, 0x00000001, 0x86dd6000, 0x00000001, + 0x81000000, 0x00000001, 0x88a80000, 0x00000001, + 0x08060000, 0x00000001, 0x80350000, 0x00000001, + 0x88080000, 0x00000001, 0x88f70000, 0x00000001, + 0x88cc0000, 0x00000001, 0x88090000, 0x00000001, + 0x89150000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000001, + 0x11006000, 0x00000001, 0x06006000, 0x00000001, + 0x02006000, 0x00000001, 0x3a006000, 0x00000001, + 0x2f006000, 0x00000001, 0x84006000, 0x00000001, + 0x32006000, 0x00000001, 0x2c006000, 0x00000001, + 0x3c006000, 0x00000001, 0x2b006000, 0x00000001, + 0x00006000, 0x00000001, 0x00004000, 0x00000001, + 0x00004000, 0x00000001, 0x20004000, 0x00000001, + 0x40004000, 0x00000001, 0x00000000, 0x00000001, + 0x11000000, 0x00000001, 0x06000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x2f000000, 0x00000001, 0x84000000, 0x00000001, + 0x32000000, 0x00000001, 0x2c000000, 0x00000001, + 0x2b000000, 0x00000001, 0x3c000000, 0x00000001, + 0x3b000000, 0x00000001, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x11000000, 0x00000001, 0x06000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x2f000000, 0x00000001, 0x84000000, 0x00000001, + 0x32000000, 0x00000001, 0x00000000, 0x00000000, + 0x2c000000, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x2b000000, 0x00000001, 0x3c000000, 0x00000001, + 0x3b000000, 0x00000001, 0x00000000, 0x00000001, + 0x06001072, 0x00000001, 0x06000000, 0x00000001, + 0x110012b7, 0x00000001, 
0x01000000, 0x00000001, + 0x02000000, 0x00000001, 0x3a000000, 0x00000001, + 0x32000000, 0x00000001, 0x84000000, 0x00000001, + 0x11000043, 0x00000001, 0x11000044, 0x00000001, + 0x11000222, 0x00000001, 0x11000000, 0x00000001, + 0x2f006558, 0x00000001, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000001, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec023_data[] = { + 0x10001000, 0x00001000, 0x10000000, 0x00000000, + 0x1000ffff, 0x0000ffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000fff, 0x00000fff, 0x1000ffff, 0x0000ffff, + 0x0000ffff, 0x0000ffff, 0x0000ffff, 0x0000ffff, + 0x0000ffff, 0x0000ffff, 0x0000ffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, + 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, 0x00ff0fff, + 0x00ff0fff, 0x10ff0fff, 0xffff0fff, 0x00000fff, + 0x1fff0fff, 0x1fff0fff, 0x1fff0fff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 
0x00ffffff, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00ffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0xffffffff, + 0x00ff0000, 0x00ffffff, 0x00ff0000, 0x00ffffff, + 0x00ffffff, 0x00ffffff, 0x00ffffff, 0x00ffffff, + 0x00ff0000, 0x00ff0000, 0x00ff0001, 0x00ffffff, + 0x00ff0000, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, +}; + +static u32 nbl_sec024_data[] = { + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00809190, 0x16009496, 0x00000100, 0x00000000, + 0x00800090, 0x12009092, 0x00000100, 0x00000000, + 0x00800090, 0x12009092, 0x00000100, 0x00000000, + 0x00800000, 0x0e008c8e, 0x00000100, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x08900081, 0x00008680, 0x00000200, 0x00000000, + 0x10900082, 0x28008680, 0x00000200, 0x00000000, + 0x809b0093, 0x00000000, 0x00000100, 0x00000000, + 0x809b0093, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 
0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b008f, 0x00000000, 0x00000100, 0x00000000, + 0x009b0000, 0x00000000, 0x00000100, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x009b0000, 0x00000000, 0x00000100, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000200, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000200, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000200, 0x00000000, + 0x40000000, 0x01c180c2, 0x00000300, 0x00000000, + 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000, + 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000, + 0x000b0085, 0x00a00000, 0x000002f0, 0x00000000, + 0x00000000, 0x00a089c2, 0x000005f0, 0x00000000, + 0x000b0000, 0x00000000, 0x00000200, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000300, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000300, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000300, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000300, 0x00000000, + 0x40000000, 0x01c180c2, 0x00000400, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000082, 0x00000500, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00ab0085, 0x08000000, 0x00000400, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000400, 0x00000000, + 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000, + 0x01ab0083, 0x0ca00000, 0x0000050f, 0x00000000, + 0x02ab848a, 0x08000000, 0x00000500, 0x00000000, + 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000500, 0x00000000, + 0x00ab8f8e, 0x04000000, 0x00000500, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000500, 0x00000000, + 0x04ab8e84, 0x0c000000, 0x00000500, 0x00000000, + 0x02ab848f, 0x08000000, 0x00000500, 0x00000000, + 0x02ab848f, 0x08000000, 0x00000500, 0x00000000, + 0x02ab848f, 0x08000000, 0x00000500, 0x00000000, + 0x02ab0084, 0x08000000, 0x00000500, 0x00000000, + 0x00ab0000, 0x04000000, 0x00000500, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00ab0000, 0x00000000, 0x00000500, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec025_data[] = { + 0x00000060, 0x00000090, 0x00000001, 0x00000000, + 0x00000050, 0x000000a0, 0x00000001, 0x00000000, + 0x000000a0, 0x00000050, 0x00000001, 0x00000000, + 0x00000800, 0x00000700, 0x00000001, 0x00000000, + 0x00000900, 0x00000600, 0x00000001, 0x00000000, + 0x00008000, 0x00007000, 0x00000001, 0x00000000, + 0x00009000, 0x00006000, 0x00000001, 0x00000000, + 0x0000a000, 0x00005000, 0x00000001, 0x00000000, + 0x000c0000, 0x00030000, 0x00000001, 0x00000000, + 0x000d0000, 0x00020000, 0x00000001, 0x00000000, + 0x000e0000, 0x00010000, 0x00000001, 0x00000000, + 0x00000040, 0x000000b0, 0x00000001, 0x00000000, + 0x00000070, 0x00000080, 0x00000001, 0x00000000, + 0x00000090, 0x00000060, 0x00000001, 0x00000000, + 0x00000080, 0x00000070, 
0x00000001, 0x00000000, + 0x00000700, 0x00000800, 0x00000001, 0x00000000, + 0x00007000, 0x00008000, 0x00000001, 0x00000000, + 0x00080000, 0x00070000, 0x00000001, 0x00000000, + 0x00000c00, 0x00000300, 0x00000001, 0x00000000, + 0x00000d00, 0x00000200, 0x00000001, 0x00000000, + 0x00400000, 0x00b00000, 0x00000001, 0x00000000, + 0x00600000, 0x00900000, 0x00000001, 0x00000000, + 0x00300000, 0x00c00000, 0x00000001, 0x00000000, + 0x00500000, 0x00a00000, 0x00000001, 0x00000000, + 0x00700000, 0x00800000, 0x00000001, 0x00000000, + 0x00000000, 0x00f00000, 0x00000001, 0x00000000, + 0x00000000, 0x00f00000, 0x00000001, 0x00000000, + 0x00100000, 0x00e00000, 0x00000001, 0x00000000, + 0x00200000, 0x00d00000, 0x00000001, 0x00000000, + 0x00800000, 0x00700000, 0x00000001, 0x00000000, + 0x00900000, 0x00600000, 0x00000001, 0x00000000, + 0x00a00000, 0x00500000, 0x00000001, 0x00000000, + 0x00b00000, 0x00400000, 0x00000001, 0x00000000, + 0x000f0000, 0x00000000, 0x00000001, 0x00000000, + 0x00f00000, 0x00000000, 0x00000001, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 
0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec026_data[] = {
+	0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+	0x0000000a, 0x0000000a, 0x0000000a, 0x0000000a,
+	0x0000000a, 0x0000000a, 0x0000000a, 0x00000000,
+	0x0000000b, 0x00000008, 0x00000009, 0x0000000f,
+	0x0000000f, 0x0000000f, 0x0000000f, 0x0000000f,
+	0x0000000c, 0x0000000d, 0x00000001, 0x00000001,
+	0x0000000e, 0x00000005, 0x00000002, 0x00000002,
+	0x00000004, 0x00000003, 0x00000003, 0x00000003,
+	0x00000003, 0x0000000a, 0x0000000a, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec027_data[] = {
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080634, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080730, 0x00080834, 0x0008082e,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00080020, 0x00080228, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00080224,
+	0x00080326, 0x00080730, 0x00080932, 0x00080a34,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0009061c, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x0009033a,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00090200, 0x00090304, 0x00090408, 0x0009050c,
+	0x00090610, 0x00090714, 0x00090818, 0x0009121c,
+	0x0009131e, 0x00000000, 0x00000000, 0x00000000,
+	0x0009063d, 0x00090740, 0x000d803f, 0x000d413f,
+	0x0009030c, 0x0009041c, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0009013f, 0x00090840, 0x000dc93d, 0x000d093d,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a003c, 0x000a0037, 0x000ec139, 0x000e0139,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+	0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+	0x000a0138, 0x000a0742, 0x00000000, 0x00000000,
+	0x000a0d41, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x000a0036,
+	0x000a0138, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0d3e, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x000a0037, 0x000a0139, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec028_data[] = {
+	0x00000006, 0x00000001, 0x00000004, 0x00000001,
+	0x00000000, 0x00000001, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000040, 0x00000001, 0x00000010, 0x00000001,
+	0x00000000, 0x00000001, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00500000, 0x00000001, 0x00700000, 0x00000001,
+	0x00a00000, 0x00000001, 0x00b00000, 0x00000001,
+	0x00200000, 0x00000001, 0x00000000, 0x00000001,
+	0x00300000, 0x00000001, 0x00800000, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec029_data[] = {
+	0xfffffff0, 0xfffffff1, 0xfffffff3, 0xffffffff,
+	0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+	0xffffff0f, 0xffffff0f, 0xffffff0f, 0xffffffff,
+	0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+	0xff0fffff, 0xff0fffff, 0xff0fffff, 0xff0fffff,
+	0xff0fffff, 0xff1fffff, 0xff0fffff, 0xff1fffff,
+	0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+	0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff,
+};
+
+static u32 nbl_sec030_data[] = {
+	0x00000000, 0x00000001, 0x00000002, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000001, 0x00000002, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000001, 0x00000001, 0x00000001,
+	0x00000002, 0x00000003, 0x00000004, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec039_data[] = {
+	0xfef80000, 0x00000002, 0x000002e0, 0x00000000,
+	0xfef8013e, 0x00000002, 0x000002e0, 0x00000000,
+	0x6660013e, 0x726e6802, 0x02224e42, 0x00000000,
+	0x6660013e, 0x726e6802, 0x02224e42, 0x00000000,
+	0x66600000, 0x726e6802, 0x02224e42, 0x00000000,
+	0x66600000, 0x726e6802, 0x02224e42, 0x00000000,
+	0x66600000, 0x00026802, 0x02224e40, 0x00000000,
+	0x66627800, 0x00026802, 0x02224e40, 0x00000000,
+	0x66600000, 0x00026a76, 0x02224e40, 0x00000000,
+	0x66600000, 0x00026802, 0x00024e40, 0x00000000,
+	0x66600000, 0x00026802, 0x00024e40, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec040_data[] = {
+	0x0040fb3f, 0x00000001, 0x0440fb3f, 0x00000001,
+	0x0502fa00, 0x00000001, 0x0602f900, 0x00000001,
+	0x0903e600, 0x00000001, 0x0a03e500, 0x00000001,
+	0x1101e600, 0x00000001, 0x1201e500, 0x00000001,
+	0x0000ff00, 0x00000001, 0x0008ff07, 0x00000001,
+	0x00ffff00, 0x00000001, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec046_4p_data[] = {
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0xa0000000, 0x00077c2b, 0x005c0000,
+	0x00000000, 0x00008100, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x20000000, 0x00073029, 0x00480000,
+	0x00000000, 0x00008100, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x20000000, 0x00073029, 0x00480000,
+	0x70000000, 0x00000020, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0xa0000000, 0x00000009, 0x00000000,
+	0x00000000, 0x00002100, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0xb0000000, 0x00000009, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000100, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000100, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x70000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x70000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x38430000,
+	0x70000006, 0x00000020, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x98cb1180, 0x6e36d469,
+	0x9d8eb91c, 0x87e3ef47, 0xa2931288, 0x08405c5a,
+	0x73865086, 0x00000080, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0xb0000000, 0x000b3849, 0x38430000,
+	0x00000006, 0x0000c100, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0xb0000000, 0x00133889, 0x08400000,
+	0x03865086, 0x4c016100, 0x00000014, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec047_data[] = {
+	0x2040dc3f, 0x00000001, 0x2000dcff, 0x00000001,
+	0x2200dcff, 0x00000001, 0x0008dc01, 0x00000001,
+	0x0001de00, 0x00000001, 0x2900c4ff, 0x00000001,
+	0x3100c4ff, 0x00000001, 0x2b00c4ff, 0x00000001,
+	0x3300c4ff, 0x00000001, 0x2700d8ff, 0x00000001,
+	0x2300d8ff, 0x00000001, 0x2502d800, 0x00000001,
+	0x2102d800, 0x00000001, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec052_data[] = {
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x30000000, 0x000b844c, 0xc8580000,
+	0x00000006, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x20000000, 0xb0d3668b, 0xb0555e12,
+	0x03b055c6, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x20000000, 0xa64b3449, 0x405a3cc1,
+	0x00000006, 0x3d2d3300, 0x00000010, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x20000000, 0x26473429, 0x00482cc1,
+	0x00000000, 0x00ccd300, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec053_data[] = {
+	0x0840f03f, 0x00000001, 0x0040f03f, 0x00000001,
+	0x0140fa3f, 0x00000001, 0x0100fa0f, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec058_data[] = {
+	0x00000000, 0x00000000, 0x59f89400, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00470000,
+	0x00000000, 0x3c000000, 0xa2e40006, 0x00000017,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x19fa1400, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x28440000,
+	0x038e5186, 0x3c000000, 0xa8e40012, 0x00000047,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x0001f3d0, 0x00000000,
+	0x00000000, 0xb0000000, 0x00133889, 0x38c30000,
+	0x0000000a, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x0001f3d0, 0x00000000,
+	0x00000000, 0xb0000000, 0x00133889, 0x38c30000,
+	0x0000000a, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x000113d0, 0x00000000,
+	0x00000000, 0xb0000000, 0x00073829, 0x00430000,
+	0x00000000, 0x3c000000, 0x0000000a, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x000293d0, 0x00000000,
+	0x00000000, 0xb0000000, 0x00133889, 0x08400000,
+	0x03865086, 0x3c000000, 0x00000016, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec059_data[] = {
+	0x0200e4ff, 0x00000001, 0x0400e2ff, 0x00000001,
+	0x1300ecff, 0x00000001, 0x1500eaff, 0x00000001,
+	0x0300e4ff, 0x00000001, 0x0500e2ff, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec062_data[] = {
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x90939899, 0x88809c9b, 0x0000013d, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec063_data[] = {
+	0x0500e2ff, 0x00000001, 0x0900e2ff, 0x00000001,
+	0x1900e2ff, 0x00000001, 0x1100e2ff, 0x00000001,
+	0x0100e2ff, 0x00000001, 0x0600e1ff, 0x00000001,
+	0x0a00e1ff, 0x00000001, 0x1a00e1ff, 0x00000001,
+	0x1200e1ff, 0x00000001, 0x0200e1ff, 0x00000001,
+	0x0000fcff, 0x00000001, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec065_data[] = {
+	0x006e120c, 0x006e1210, 0x006e4208, 0x006e4218,
+	0x00200b02, 0x00200b00, 0x000e1900, 0x000e1906,
+	0x00580208, 0x00580204, 0x004c0208, 0x004c0207,
+	0x0002110c, 0x0002110c, 0x0012010c, 0x00100110,
+	0x0010010c, 0x000a010c, 0x0008010c, 0x00060000,
+	0x00160000, 0x00140000, 0x001e0000, 0x001e0000,
+	0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+	0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+};
+
+static u32 nbl_sec066_data[] = {
+	0x006e120c, 0x006e1210, 0x006e4208, 0x006e4218,
+	0x00200b02, 0x00200b00, 0x000e1900, 0x000e1906,
+	0x00580208, 0x00580204, 0x004c0208, 0x004c0207,
+	0x0002110c, 0x0002110c, 0x0012010c, 0x00100110,
+	0x0010010c, 0x000a010c, 0x0008010c, 0x00060000,
+	0x00160000, 0x00140000, 0x001e0000, 0x001e0000,
+	0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+	0x001e0000, 0x001e0000, 0x001e0000, 0x001e0000,
+};
+
+static u32 nbl_sec071_4p_data[] = {
+	0x00000000, 0x00000000, 0x00113d00, 0x00000000,
+	0x00000000, 0x00000000, 0xe7029b00, 0x00000000,
+	0x00000000, 0x43000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x51e00000, 0x00000c9c,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00293d00, 0x00000000,
+	0x00000000, 0x00000000, 0x67089b00, 0x00000002,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x80000000, 0x00000000, 0xb1e00000, 0x0000189c,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+	0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+	0x00000000, 0x43000000, 0x014b0c70, 0x00000000,
+	0x00000000, 0x00000000, 0x92600000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00213d00, 0x00000000,
+	0x00000000, 0x00000000, 0xe7069b00, 0x00000001,
+	0x00000000, 0x43000000, 0x015b0c70, 0x00000000,
+	0x00000000, 0x00000000, 0x92600000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+	0x00000000, 0x00000000, 0xe6d29a00, 0x000149c4,
+	0x00000000, 0x4b000000, 0x00000004, 0x00000000,
+	0x80000000, 0x00022200, 0x62600000, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00553d00, 0x00000000,
+	0x00000000, 0x00000000, 0xe6d2c000, 0x000149c4,
+	0x00000000, 0x5b000000, 0x00000004, 0x00000000,
+	0x80000000, 0x00022200, 0x62600000, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+	0x00000000, 0x00000000, 0x64d49200, 0x5e556945,
+	0xc666d89a, 0x4b0001a9, 0x00004c84, 0x00000000,
+	0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x006d3d00, 0x00000000,
+	0x00000000, 0x00000000, 0x6ed4ba00, 0x5ef56bc5,
+	0xc666d8c0, 0x5b0001a9, 0x00004dc4, 0x00000000,
+	0x80000000, 0x00022200, 0xc2600000, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000002, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00700000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec072_data[] = {
+	0x84006aff, 0x00000001, 0x880066ff, 0x00000001,
+	0x140040ff, 0x00000001, 0x70000cff, 0x00000001,
+	0x180040ff, 0x00000001, 0x30000cff, 0x00000001,
+	0x10004cff, 0x00000001, 0x30004cff, 0x00000001,
+	0x0100ecff, 0x00000001, 0x0300ecff, 0x00000001,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec116_data[] = {
+	0x00000000, 0x00000000, 0x3fff8000, 0x00000007,
+	0x3fff8000, 0x00000007, 0x3fff8000, 0x00000007,
+	0x3fff8000, 0x00000003, 0x3fff8000, 0x00000003,
+	0x3fff8000, 0x00000007, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static u32 nbl_sec124_data[] = {
+	0xfffffffc, 0xffffffff, 0x00300000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000500, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x00300010, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000500, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x00300010, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000500, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x00300fff, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x00301fff, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x0030ffff, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0xffffffff, 0x00300000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000500, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x0000fffe, 0x00000000, 0x00300000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000480, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffc, 0x00ffffff, 0x00300000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000480, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0xfffffffe, 0x0000000f, 0x00300000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000580, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000,
0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec125_data[] = { + 0xfffffffc, 0x01ffffff, 0x00300000, 0x70000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000480, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfffffffe, 0x00000001, 0x00300000, 0x70000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000540, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfffffffe, 0x011003ff, 0x00300000, 0x70000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x000005c0, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfffffffc, 0x103fffff, 0x00300001, 0x70000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000480, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 
0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec126_data[] = { + 0xfffffffc, 0xffffffff, 0x00300001, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 
0x00000000, + 0x00000500, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfffffffe, 0x000001ff, 0x00300000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x000005c0, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00002013, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000400, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00002013, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000400, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfffffffc, 0x01ffffff, 0x00300000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000480, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfffffffe, 0x00000001, 0x00300000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000540, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 
0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, +}; + +static u32 nbl_sec137_data[] = { + 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a, + 0x0000017a, 0x00000080, 0x00000024, 0x0000017a, + 0x0000017a, 0x00000191, 0x00000035, 0x0000017a, + 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a, + 0x0000017a, 0x000000d2, 0x00000066, 0x0000017a, + 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a, + 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a, + 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a, +}; + +static u32 nbl_sec138_data[] = { + 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a, + 0x0000017a, 0x00000080, 0x00000024, 0x0000017a, + 0x0000017a, 0x00000191, 0x00000035, 0x0000017a, + 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a, + 0x0000017a, 0x000000d2, 0x00000066, 0x0000017a, + 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a, + 0x0000017a, 0x000000f2, 0x00000076, 0x0000017a, + 0x0000017a, 0x0000017a, 0x0000017a, 0x0000017a, +}; + 
+void nbl_write_all_regs(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + u32 *nbl_sec046_data; + u32 *nbl_sec071_data; + u8 eth_mode = NBL_COMMON_TO_ETH_MODE(common); + u32 i = 0; + + switch (eth_mode) { + case 1: + nbl_sec046_data = nbl_sec046_1p_data; + nbl_sec071_data = nbl_sec071_1p_data; + break; + case 2: + nbl_sec046_data = nbl_sec046_2p_data; + nbl_sec071_data = nbl_sec071_2p_data; + break; + case 4: + nbl_sec046_data = nbl_sec046_4p_data; + nbl_sec071_data = nbl_sec071_4p_data; + break; + default: + nbl_sec046_data = nbl_sec046_2p_data; + nbl_sec071_data = nbl_sec071_2p_data; + } + + for (i = 0; i < NBL_SEC006_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC006_REGI(i), nbl_sec006_data[i]); + } + + for (i = 0; i < NBL_SEC007_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC007_REGI(i), nbl_sec007_data[i]); + + for (i = 0; i < NBL_SEC008_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC008_REGI(i), nbl_sec008_data[i]); + } + + for (i = 0; i < NBL_SEC009_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC009_REGI(i), nbl_sec009_data[i]); + } + + for (i = 0; i < NBL_SEC010_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC010_REGI(i), nbl_sec010_data[i]); + + for (i = 0; i < NBL_SEC011_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC011_REGI(i), nbl_sec011_data[i]); + } + + for (i = 0; i < NBL_SEC012_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC012_REGI(i), nbl_sec012_data[i]); + + for (i = 0; i < NBL_SEC013_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC013_REGI(i), nbl_sec013_data[i]); + + for (i = 0; i < NBL_SEC014_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC014_REGI(i), nbl_sec014_data[i]); + + for 
(i = 0; i < NBL_SEC022_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC022_REGI(i), nbl_sec022_data[i]); + + for (i = 0; i < NBL_SEC023_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC023_REGI(i), nbl_sec023_data[i]); + + for (i = 0; i < NBL_SEC024_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC024_REGI(i), nbl_sec024_data[i]); + } + + for (i = 0; i < NBL_SEC025_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC025_REGI(i), nbl_sec025_data[i]); + } + + for (i = 0; i < NBL_SEC026_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC026_REGI(i), nbl_sec026_data[i]); + + for (i = 0; i < NBL_SEC027_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC027_REGI(i), nbl_sec027_data[i]); + } + + for (i = 0; i < NBL_SEC028_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC028_REGI(i), nbl_sec028_data[i]); + + for (i = 0; i < NBL_SEC029_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC029_REGI(i), nbl_sec029_data[i]); + + for (i = 0; i < NBL_SEC030_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC030_REGI(i), nbl_sec030_data[i]); + + for (i = 0; i < NBL_SEC039_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC039_REGI(i), nbl_sec039_data[i]); + + for (i = 0; i < NBL_SEC040_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC040_REGI(i), nbl_sec040_data[i]); + + for (i = 0; i < NBL_SEC046_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC046_REGI(i), nbl_sec046_data[i]); + + for (i = 0; i < NBL_SEC047_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC047_REGI(i), nbl_sec047_data[i]); + + for (i = 0; i < NBL_SEC052_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC052_REGI(i), nbl_sec052_data[i]); + + for (i = 0; i < NBL_SEC053_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC053_REGI(i), nbl_sec053_data[i]); + + for (i = 0; i < NBL_SEC058_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC058_REGI(i), nbl_sec058_data[i]); + + for (i = 0; i < NBL_SEC059_SIZE; i++) + 
nbl_hw_wr32(hw_mgt, NBL_SEC059_REGI(i), nbl_sec059_data[i]); + + for (i = 0; i < NBL_SEC062_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC062_REGI(i), nbl_sec062_data[i]); + + for (i = 0; i < NBL_SEC063_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC063_REGI(i), nbl_sec063_data[i]); + + for (i = 0; i < NBL_SEC065_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC065_REGI(i), nbl_sec065_data[i]); + + for (i = 0; i < NBL_SEC066_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC066_REGI(i), nbl_sec066_data[i]); + + for (i = 0; i < NBL_SEC071_SIZE; i++) { + if ((i + 1) % NBL_SEC_BLOCK_SIZE == 0) + nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + nbl_hw_wr32(hw_mgt, NBL_SEC071_REGI(i), nbl_sec071_data[i]); + } + + for (i = 0; i < NBL_SEC072_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC072_REGI(i), nbl_sec072_data[i]); + + for (i = 0; i < NBL_SEC116_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC116_REGI(i), nbl_sec116_data[i]); + + for (i = 0; i < NBL_SEC124_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC124_REGI(i), nbl_sec124_data[i]); + + for (i = 0; i < NBL_SEC125_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC125_REGI(i), nbl_sec125_data[i]); + + for (i = 0; i < NBL_SEC126_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC126_REGI(i), nbl_sec126_data[i]); + + for (i = 0; i < NBL_SEC137_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC137_REGI(i), nbl_sec137_data[i]); + + for (i = 0; i < NBL_SEC138_SIZE; i++) + nbl_hw_wr32(hw_mgt, NBL_SEC138_REGI(i), nbl_sec138_data[i]); + + nbl_hw_wr32(hw_mgt, NBL_SEC000_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC001_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC002_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC003_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC004_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC005_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC015_ADDR, 0x000f0908); + nbl_hw_wr32(hw_mgt, NBL_SEC016_ADDR, 0x10110607); + nbl_hw_wr32(hw_mgt, NBL_SEC017_ADDR, 0x383a3032); + nbl_hw_wr32(hw_mgt, NBL_SEC018_ADDR, 0x0201453f); + nbl_hw_wr32(hw_mgt, NBL_SEC019_ADDR, 0x00000a41); + 
nbl_hw_wr32(hw_mgt, NBL_SEC020_ADDR, 0x000000c8); + nbl_hw_wr32(hw_mgt, NBL_SEC021_ADDR, 0x00000400); + nbl_hw_wr32(hw_mgt, NBL_SEC031_ADDR, 0x000f0908); + nbl_hw_wr32(hw_mgt, NBL_SEC032_ADDR, 0x00001011); + nbl_hw_wr32(hw_mgt, NBL_SEC033_ADDR, 0x00003032); + nbl_hw_wr32(hw_mgt, NBL_SEC034_ADDR, 0x0201003f); + nbl_hw_wr32(hw_mgt, NBL_SEC035_ADDR, 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC036_ADDR, 0x00001701); + nbl_hw_wr32(hw_mgt, NBL_SEC037_ADDR, 0x009238a1); + nbl_hw_wr32(hw_mgt, NBL_SEC038_ADDR, 0x0000002e); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(0), 0x00000200); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(1), 0x00000300); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(2), 0x00000105); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(3), 0x00000106); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(4), 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(5), 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(6), 0x00000041); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(7), 0x00000082); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(8), 0x00000020); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(9), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(10), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(11), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(12), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(13), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(14), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC041_REGI(15), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC042_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC043_ADDR, 0x00000002); + nbl_hw_wr32(hw_mgt, NBL_SEC044_ADDR, 0x28212000); + nbl_hw_wr32(hw_mgt, NBL_SEC045_ADDR, 0x00002b29); + nbl_hw_wr32(hw_mgt, NBL_SEC048_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC049_ADDR, 0x00000002); + nbl_hw_wr32(hw_mgt, NBL_SEC050_ADDR, 0x352b2000); + nbl_hw_wr32(hw_mgt, NBL_SEC051_ADDR, 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC054_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC055_ADDR, 0x00000002); + nbl_hw_wr32(hw_mgt, NBL_SEC056_ADDR, 0x2b222100); + nbl_hw_wr32(hw_mgt, 
NBL_SEC057_ADDR, 0x00000038); + nbl_hw_wr32(hw_mgt, NBL_SEC060_ADDR, 0x24232221); + nbl_hw_wr32(hw_mgt, NBL_SEC061_ADDR, 0x0000002e); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(0), 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(1), 0x00000005); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(2), 0x00000011); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(3), 0x00000005); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(4), 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(5), 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(6), 0x00000006); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(7), 0x00000012); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(8), 0x00000006); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(9), 0x00000002); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(10), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(11), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(12), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(13), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(14), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC064_REGI(15), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC067_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC068_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC069_ADDR, 0x22212000); + nbl_hw_wr32(hw_mgt, NBL_SEC070_ADDR, 0x3835322b); + nbl_hw_wr32(hw_mgt, NBL_SEC073_ADDR, 0x0316a5ff); + nbl_hw_wr32(hw_mgt, NBL_SEC074_ADDR, 0x0316a5ff); + nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(0), 0x08802080); + nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(1), 0x12a05080); + nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(2), 0xffffffff); + nbl_hw_wr32(hw_mgt, NBL_SEC075_REGI(3), 0xffffffff); + nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(0), 0x08802080); + nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(1), 0x12a05080); + nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(2), 0xffffffff); + nbl_hw_wr32(hw_mgt, NBL_SEC076_REGI(3), 0xffffffff); + nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(0), 0x08802080); + nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(1), 0x12a05080); + nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(2), 0xffffffff); + nbl_hw_wr32(hw_mgt, NBL_SEC077_REGI(3), 0xffffffff); + 
nbl_hw_wr32(hw_mgt, NBL_SEC078_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC079_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC080_ADDR, 0x0014a248); + nbl_hw_wr32(hw_mgt, NBL_SEC081_ADDR, 0x00000d33); + nbl_hw_wr32(hw_mgt, NBL_SEC082_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC083_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC084_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC085_ADDR, 0x000144d2); + nbl_hw_wr32(hw_mgt, NBL_SEC086_ADDR, 0x31322e2f); + nbl_hw_wr32(hw_mgt, NBL_SEC087_ADDR, 0x0a092d2c); + nbl_hw_wr32(hw_mgt, NBL_SEC088_ADDR, 0x33050804); + nbl_hw_wr32(hw_mgt, NBL_SEC089_ADDR, 0x14131535); + nbl_hw_wr32(hw_mgt, NBL_SEC090_ADDR, 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC091_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC092_ADDR, 0x00000008); + nbl_hw_wr32(hw_mgt, NBL_SEC093_ADDR, 0x0000000e); + nbl_hw_wr32(hw_mgt, NBL_SEC094_ADDR, 0x0000000f); + nbl_hw_wr32(hw_mgt, NBL_SEC095_ADDR, 0x00000015); + nbl_hw_wr32(hw_mgt, NBL_SEC096_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC097_ADDR, 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC098_ADDR, 0x00000008); + nbl_hw_wr32(hw_mgt, NBL_SEC099_ADDR, 0x00000011); + nbl_hw_wr32(hw_mgt, NBL_SEC100_ADDR, 0x00000013); + nbl_hw_wr32(hw_mgt, NBL_SEC101_ADDR, 0x00000014); + nbl_hw_wr32(hw_mgt, NBL_SEC102_ADDR, 0x00000010); + nbl_hw_wr32(hw_mgt, NBL_SEC103_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC104_ADDR, 0x0000004d); + nbl_hw_wr32(hw_mgt, NBL_SEC105_ADDR, 0x08020a09); + nbl_hw_wr32(hw_mgt, NBL_SEC106_ADDR, 0x00000005); + nbl_hw_wr32(hw_mgt, NBL_SEC107_ADDR, 0x00000006); + nbl_hw_wr32(hw_mgt, NBL_SEC108_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC109_ADDR, 0x00110a09); + nbl_hw_wr32(hw_mgt, NBL_SEC110_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC111_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC112_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, NBL_SEC113_ADDR, 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC114_ADDR, 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC115_ADDR, 0x00000009); + nbl_hw_wr32(hw_mgt, 
NBL_SEC117_ADDR, 0x0000000a); + nbl_hw_wr32(hw_mgt, NBL_SEC118_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(0), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(1), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(2), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(3), 0x00000000); + nbl_hw_wr32(hw_mgt, NBL_SEC119_REGI(4), 0x00000100); + nbl_hw_wr32(hw_mgt, NBL_SEC120_ADDR, 0x0000003c); + nbl_hw_wr32(hw_mgt, NBL_SEC121_ADDR, 0x00000003); + nbl_hw_wr32(hw_mgt, NBL_SEC122_ADDR, 0x000000bc); + nbl_hw_wr32(hw_mgt, NBL_SEC123_ADDR, 0x0000023b); + nbl_hw_wr32(hw_mgt, NBL_SEC127_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC128_ADDR, 0x00000001); + nbl_hw_wr32(hw_mgt, NBL_SEC129_ADDR, 0x00000002); + nbl_hw_wr32(hw_mgt, NBL_SEC130_ADDR, 0x00000002); + nbl_hw_wr32(hw_mgt, NBL_SEC131_ADDR, 0x00000003); + nbl_hw_wr32(hw_mgt, NBL_SEC132_ADDR, 0x00000003); + nbl_hw_wr32(hw_mgt, NBL_SEC133_ADDR, 0x00000004); + nbl_hw_wr32(hw_mgt, NBL_SEC134_ADDR, 0x00000004); + nbl_hw_wr32(hw_mgt, NBL_SEC135_ADDR, 0x0000000e); + nbl_hw_wr32(hw_mgt, NBL_SEC136_ADDR, 0x0000000e); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h new file mode 100644 index 000000000000..187f7557cc9e --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_HW_LEONIS_REGS_H_ +#define _NBL_HW_LEONIS_REGS_H_ + +void nbl_write_all_regs(void *priv); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 05/15] net/nebula-matrix: add channel layer definitions and implementation
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (3 preceding siblings ...)
2026-01-09 10:01 ` [PATCH v2 net-next 04/15] net/nebula-matrix: add machine-generated headers and chip definitions illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 06/15] net/nebula-matrix: add resource " illusion.wang
` (10 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list
The channel management layer provides a structured approach to handling
communication between different components and drivers. A summary of its
key functionalities:

1. Message Handling Framework
Message Registration/Unregistration: Functions (nbl_chan_register_msg,
nbl_chan_unregister_msg) allow dynamic registration of message handlers
for specific message types, enabling extensible communication protocols.
Message Sending/Acknowledgment: Core functions (nbl_chan_send_msg,
nbl_chan_send_ack) handle message transmission, including asynchronous
operations with acknowledgment (ACK) support. Received ACKs are
processed via nbl_chan_recv_ack_msg.
Hash-Based Handler Lookup: A hash table (handle_hash_tbl) stores message
handlers for efficient O(1) lookup by message type.

2. Channel Types and Queue Management
Dual Channel Support: The driver supports two channel types:
Mailbox Channel: For direct communication between PF and VF.
Admin Queue (AdminQ): For privileged operations requiring kernel-level
access (e.g., configuration).
Queue Initialization/Teardown: Functions (nbl_chan_init_queue,
nbl_chan_teardown_queue) manage transmit (TX) and receive (RX) queues,
including DMA buffer allocation/deallocation (dmam_alloc_coherent,
dmam_free_coherent).
Queue Configuration: Hardware-specific queue parameters (e.g., buffer
sizes, entry counts) are set via nbl_chan_config_queue, with hardware
interactions delegated to hw_ops.

3. Hardware Abstraction Layer (HW Ops)
Hardware-Specific Operations: The nbl_hw_ops structure abstracts
hardware interactions, allowing different chip variants to implement
their own queue configuration (config_mailbox_txq/rxq,
config_adminq_txq/rxq), tail pointer updates
(update_mailbox_queue_tail_ptr), and DMA error checks
(check_mailbox_dma_err, check_adminq_dma_err).

4. Keepalive Mechanism
Heartbeat Monitoring: A keepalive system (nbl_chan_setup_keepalive,
nbl_chan_keepalive) ensures connectivity between drivers by periodically
sending heartbeat messages (NBL_CHAN_MSG_KEEP_ALIVE). Timeouts are
adjusted dynamically based on success/failure rates.

5. Error Handling and Recovery
DMA Error Detection: Functions like nbl_chan_check_dma_err detect
hardware-level errors during TX/RX operations, triggering queue resets
(nbl_chan_reset_queue) if needed.
Retry Logic: Message sending includes retry mechanisms (resend_times)
for transient failures (e.g., ACK timeouts).

6. Asynchronous Task Support
Delayed Work Queues: Uses Linux kernel delayed work (delayed_work) for
background tasks like keepalive checks (nbl_chan_keepalive) and queue
cleanup (nbl_chan_clean_queue_subtask).

7. Initialization and Cleanup
Modular Setup: The nbl_chan_init_common function initializes the channel
management layer, including memory allocation for channel structures
(nbl_channel_mgt_leonis), message handlers, and hardware operations
tables (nbl_channel_ops_tbl).
Resource Cleanup: Corresponding nbl_chan_remove_common ensures all
allocated resources (memory, workqueues, handlers) are freed during
driver unloading.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
.../net/ethernet/nebula-matrix/nbl/Makefile | 4 +-
.../nbl/nbl_channel/nbl_channel.c | 1482 +++++++++++++++++
.../nbl/nbl_channel/nbl_channel.h | 205 +++
.../nebula-matrix/nbl/nbl_common/nbl_common.c | 784 +++++++++
.../nebula-matrix/nbl/nbl_common/nbl_common.h | 54 +
.../net/ethernet/nebula-matrix/nbl/nbl_core.h | 5 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 259 +++
.../nbl/nbl_include/nbl_def_channel.h | 715 ++++++++
.../nbl/nbl_include/nbl_def_common.h | 187 +++
.../nbl/nbl_include/nbl_def_hw.h | 27 +
.../nbl/nbl_include/nbl_include.h | 67 +
.../net/ethernet/nebula-matrix/nbl/nbl_main.c | 10 +-
12 files changed, 3796 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h
create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index f5c1f8030beb..db04128977d5 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -4,7 +4,9 @@
obj-$(CONFIG_NBL_CORE) := nbl_core.o
-nbl_core-objs += nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
+nbl_core-objs += nbl_common/nbl_common.o \
+ nbl_channel/nbl_channel.o \
+ nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \
nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \
nbl_main.o
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c
new file
mode 100644 index 000000000000..f9c7fea7d13c --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.c @@ -0,0 +1,1482 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ +#include <linux/delay.h> +#include "nbl_channel.h" + +static int nbl_chan_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack); + +static void nbl_chan_delete_msg_handler(struct nbl_channel_mgt *chan_mgt, + u16 msg_type) +{ + struct nbl_chan_info *chan_info; + u8 chan_type; + + nbl_common_free_hash_node(chan_mgt->handle_hash_tbl, &msg_type); + + if (msg_type < NBL_CHAN_MSG_ADMINQ_GET_EMP_VERSION) + chan_type = NBL_CHAN_TYPE_MAILBOX; + else + chan_type = NBL_CHAN_TYPE_ADMINQ; + + chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type); + if (chan_info && chan_info->clean_task) + nbl_common_flush_task(chan_info->clean_task); +} + +static int nbl_chan_add_msg_handler(struct nbl_channel_mgt *chan_mgt, + u16 msg_type, nbl_chan_resp func, + void *priv) +{ + struct nbl_chan_msg_node_data handler = { 0 }; + int ret; + + handler.func = func; + handler.priv = priv; + + ret = nbl_common_alloc_hash_node(chan_mgt->handle_hash_tbl, &msg_type, + &handler, NULL); + + return ret; +} + +static int nbl_chan_init_msg_handler(struct nbl_channel_mgt *chan_mgt) +{ + struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt); + struct nbl_hash_tbl_key tbl_key; + int ret = 0; + + NBL_HASH_TBL_KEY_INIT(&tbl_key, NBL_COMMON_TO_DEV(common), sizeof(u16), + sizeof(struct nbl_chan_msg_node_data), + NBL_CHAN_HANDLER_TBL_BUCKET_SIZE, false); + + chan_mgt->handle_hash_tbl = nbl_common_init_hash_table(&tbl_key); + if (!chan_mgt->handle_hash_tbl) { + ret = -ENOMEM; + goto alloc_hashtbl_failed; + } + + return 0; + +alloc_hashtbl_failed: + return ret; +} + +static void nbl_chan_remove_msg_handler(struct nbl_channel_mgt *chan_mgt) +{ + nbl_common_remove_hash_table(chan_mgt->handle_hash_tbl, NULL); + + chan_mgt->handle_hash_tbl = NULL; +} + 
+static bool nbl_chan_is_adminq(struct nbl_chan_info *chan_info)
+{
+	return chan_info->chan_type == NBL_CHAN_TYPE_ADMINQ;
+}
+
+static void nbl_chan_init_queue_param(struct nbl_chan_info *chan_info,
+				      u16 num_txq_entries, u16 num_rxq_entries,
+				      u16 txq_buf_size, u16 rxq_buf_size)
+{
+	spin_lock_init(&chan_info->txq_lock);
+	chan_info->num_txq_entries = num_txq_entries;
+	chan_info->num_rxq_entries = num_rxq_entries;
+	chan_info->txq_buf_size = txq_buf_size;
+	chan_info->rxq_buf_size = rxq_buf_size;
+}
+
+static int nbl_chan_init_tx_queue(struct nbl_common_info *common,
+				  struct nbl_chan_info *chan_info)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+	struct nbl_chan_ring *txq = &chan_info->txq;
+	size_t size =
+		chan_info->num_txq_entries * sizeof(struct nbl_chan_tx_desc);
+
+	txq->desc = dmam_alloc_coherent(dma_dev, size, &txq->dma, GFP_KERNEL);
+	if (!txq->desc)
+		return -ENOMEM;
+
+	chan_info->wait = devm_kcalloc(dev, chan_info->num_txq_entries,
+				       sizeof(struct nbl_chan_waitqueue_head),
+				       GFP_KERNEL);
+	if (!chan_info->wait)
+		goto req_wait_queue_failed;
+
+	txq->buf = devm_kcalloc(dev, chan_info->num_txq_entries,
+				sizeof(struct nbl_chan_buf), GFP_KERNEL);
+	if (!txq->buf)
+		goto req_num_txq_entries;
+
+	return 0;
+
+req_num_txq_entries:
+	devm_kfree(dev, chan_info->wait);
+req_wait_queue_failed:
+	dmam_free_coherent(dma_dev, size, txq->desc, txq->dma);
+
+	txq->desc = NULL;
+	txq->dma = 0;
+	chan_info->wait = NULL;
+	return -ENOMEM;
+}
+
+static int nbl_chan_init_rx_queue(struct nbl_common_info *common,
+				  struct nbl_chan_info *chan_info)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+	struct nbl_chan_ring *rxq = &chan_info->rxq;
+	size_t size =
+		chan_info->num_rxq_entries * sizeof(struct nbl_chan_rx_desc);
+
+	rxq->desc = dmam_alloc_coherent(dma_dev, size, &rxq->dma, GFP_KERNEL);
+	if (!rxq->desc) {
+		dev_err(dev,
+			"Allocate DMA for chan rx descriptor ring failed\n");
+		return -ENOMEM;
+	}
+
+	rxq->buf = devm_kcalloc(dev, chan_info->num_rxq_entries,
+				sizeof(struct nbl_chan_buf), GFP_KERNEL);
+	if (!rxq->buf) {
+		dmam_free_coherent(dma_dev, size, rxq->desc, rxq->dma);
+		rxq->desc = NULL;
+		rxq->dma = 0;
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void nbl_chan_remove_tx_queue(struct nbl_common_info *common,
+				     struct nbl_chan_info *chan_info)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+	struct nbl_chan_ring *txq = &chan_info->txq;
+	size_t size =
+		chan_info->num_txq_entries * sizeof(struct nbl_chan_tx_desc);
+
+	devm_kfree(dev, txq->buf);
+	txq->buf = NULL;
+
+	devm_kfree(dev, chan_info->wait);
+	chan_info->wait = NULL;
+
+	dmam_free_coherent(dma_dev, size, txq->desc, txq->dma);
+	txq->desc = NULL;
+	txq->dma = 0;
+}
+
+static void nbl_chan_remove_rx_queue(struct nbl_common_info *common,
+				     struct nbl_chan_info *chan_info)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(common);
+	struct nbl_chan_ring *rxq = &chan_info->rxq;
+	size_t size =
+		chan_info->num_rxq_entries * sizeof(struct nbl_chan_rx_desc);
+
+	devm_kfree(dev, rxq->buf);
+	rxq->buf = NULL;
+
+	dmam_free_coherent(dma_dev, size, rxq->desc, rxq->dma);
+	rxq->desc = NULL;
+	rxq->dma = 0;
+}
+
+static int nbl_chan_init_queue(struct nbl_common_info *common,
+			       struct nbl_chan_info *chan_info)
+{
+	int err;
+
+	err = nbl_chan_init_tx_queue(common, chan_info);
+	if (err)
+		return err;
+
+	err = nbl_chan_init_rx_queue(common, chan_info);
+	if (err)
+		goto setup_rx_queue_err;
+
+	return 0;
+
+setup_rx_queue_err:
+	nbl_chan_remove_tx_queue(common, chan_info);
+	return err;
+}
+
+static void nbl_chan_config_queue(struct nbl_channel_mgt *chan_mgt,
+				  struct nbl_chan_info *chan_info, bool tx)
+{
+	struct nbl_hw_ops *hw_ops;
+	struct nbl_chan_ring *ring;
+	dma_addr_t dma_addr;
+	void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+	int size_bwid = ilog2(tx ? chan_info->num_txq_entries :
+				   chan_info->num_rxq_entries);
+
+	hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+	if (tx)
+		ring = &chan_info->txq;
+	else
+		ring = &chan_info->rxq;
+
+	dma_addr = ring->dma;
+
+	if (nbl_chan_is_adminq(chan_info)) {
+		if (tx)
+			hw_ops->config_adminq_txq(p, dma_addr, size_bwid);
+		else
+			hw_ops->config_adminq_rxq(p, dma_addr, size_bwid);
+	} else {
+		if (tx)
+			hw_ops->config_mailbox_txq(p, dma_addr, size_bwid);
+		else
+			hw_ops->config_mailbox_rxq(p, dma_addr, size_bwid);
+	}
+}
+
+static int nbl_chan_alloc_all_tx_bufs(struct nbl_channel_mgt *chan_mgt,
+				      struct nbl_chan_info *chan_info)
+{
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+	struct device *dev = NBL_COMMON_TO_DEV(chan_mgt->common);
+	struct nbl_chan_ring *txq = &chan_info->txq;
+	struct nbl_chan_buf *buf;
+	u16 i;
+
+	for (i = 0; i < chan_info->num_txq_entries; i++) {
+		buf = &txq->buf[i];
+		buf->va = dmam_alloc_coherent(dma_dev, chan_info->txq_buf_size,
+					      &buf->pa, GFP_KERNEL);
+		if (!buf->va) {
+			dev_err(dev,
+				"Allocate buffer for chan tx queue failed\n");
+			goto err;
+		}
+	}
+
+	txq->next_to_clean = 0;
+	txq->next_to_use = 0;
+	txq->tail_ptr = 0;
+
+	return 0;
+err:
+	while (i--) {
+		buf = &txq->buf[i];
+		dmam_free_coherent(dma_dev, chan_info->txq_buf_size, buf->va,
+				   buf->pa);
+		buf->va = NULL;
+		buf->pa = 0;
+	}
+
+	return -ENOMEM;
+}
+
+static int
+nbl_chan_cfg_mailbox_qinfo_map_table(struct nbl_channel_mgt *chan_mgt)
+{
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+	void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+	u16 func_id;
+	u32 pf_mask;
+
+	pf_mask = hw_ops->get_host_pf_mask(p);
+	for (func_id = 0; func_id < NBL_MAX_PF; func_id++) {
+		if (!(pf_mask & BIT(func_id)))
+			hw_ops->cfg_mailbox_qinfo(p, func_id,
+						  common->hw_bus,
+						  common->devid,
+						  common->function + func_id);
+	}
+
+	return 0;
+}
+
+static int
+nbl_chan_cfg_adminq_qinfo_map_table(struct nbl_channel_mgt *chan_mgt)
+{
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+	hw_ops->cfg_adminq_qinfo(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt),
+				 common->hw_bus, common->devid,
+				 NBL_COMMON_TO_PCI_FUNC_ID(common));
+
+	return 0;
+}
+
+static int nbl_chan_cfg_qinfo_map_table(void *priv, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	int err;
+
+	if (!nbl_chan_is_adminq(chan_info))
+		err = nbl_chan_cfg_mailbox_qinfo_map_table(chan_mgt);
+	else
+		err = nbl_chan_cfg_adminq_qinfo_map_table(chan_mgt);
+
+	return err;
+}
+
+static void nbl_chan_free_all_tx_bufs(struct nbl_channel_mgt *chan_mgt,
+				      struct nbl_chan_info *chan_info)
+{
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+	struct nbl_chan_ring *txq = &chan_info->txq;
+	struct nbl_chan_buf *buf;
+	u16 i;
+
+	for (i = 0; i < chan_info->num_txq_entries; i++) {
+		buf = &txq->buf[i];
+		dmam_free_coherent(dma_dev, chan_info->txq_buf_size, buf->va,
+				   buf->pa);
+		buf->va = NULL;
+		buf->pa = 0;
+	}
+}
+
+#define NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, tail_ptr, qid)\
+do {									\
+	typeof(hw_ops) _hw_ops = (hw_ops);				\
+	typeof(chan_mgt) _chan_mgt = (chan_mgt);			\
+	typeof(tail_ptr) _tail_ptr = (tail_ptr);			\
+	typeof(qid) _qid = (qid);					\
+	if (nbl_chan_is_adminq(chan_info))				\
+		(_hw_ops)->update_adminq_queue_tail_ptr(		\
+			NBL_CHAN_MGT_TO_HW_PRIV(_chan_mgt),		\
+			_tail_ptr, _qid);				\
+	else								\
+		(_hw_ops)->update_mailbox_queue_tail_ptr(		\
+			NBL_CHAN_MGT_TO_HW_PRIV(_chan_mgt),		\
+			_tail_ptr, _qid);				\
+} while (0)
+
+static int nbl_chan_alloc_all_rx_bufs(struct nbl_channel_mgt *chan_mgt,
+				      struct nbl_chan_info *chan_info)
+{
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+	struct device *dev = NBL_COMMON_TO_DEV(chan_mgt->common);
+	struct nbl_chan_ring *rxq = &chan_info->rxq;
+	struct nbl_chan_rx_desc *desc;
+	struct nbl_chan_buf *buf;
+	struct nbl_hw_ops *hw_ops;
+	u32 retry_times = 0;
+	u16 i;
+
+	hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+	for (i = 0; i < chan_info->num_rxq_entries; i++) {
+		buf = &rxq->buf[i];
+		buf->va = dmam_alloc_coherent(dma_dev, chan_info->rxq_buf_size,
+					      &buf->pa, GFP_KERNEL);
+		if (!buf->va) {
+			dev_err(dev,
+				"Allocate buffer for chan rx queue failed\n");
+			goto err;
+		}
+	}
+
+	desc = rxq->desc;
+	for (i = 0; i < chan_info->num_rxq_entries - 1; i++) {
+		buf = &rxq->buf[i];
+		desc[i].flags = NBL_CHAN_RX_DESC_AVAIL;
+		desc[i].buf_addr = buf->pa;
+		desc[i].buf_len = chan_info->rxq_buf_size;
+	}
+
+	rxq->next_to_clean = 0;
+	rxq->next_to_use = chan_info->num_rxq_entries - 1;
+	rxq->tail_ptr = chan_info->num_rxq_entries - 1;
+
+	/* Ensure descriptor writes complete before notifying HW */
+	mb();
+
+	NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, rxq->tail_ptr,
+				  NBL_MB_RX_QID);
+
+	for (retry_times = 0; retry_times < 3; retry_times++) {
+		NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt,
+					  rxq->tail_ptr, NBL_MB_RX_QID);
+		usleep_range(NBL_CHAN_TX_WAIT_US * 50,
+			     NBL_CHAN_TX_WAIT_US * 60);
+	}
+
+	return 0;
+err:
+	while (i--) {
+		buf = &rxq->buf[i];
+		dmam_free_coherent(dma_dev, chan_info->rxq_buf_size, buf->va,
+				   buf->pa);
+		buf->va = NULL;
+		buf->pa = 0;
+	}
+
+	return -ENOMEM;
+}
+
+static void nbl_chan_free_all_rx_bufs(struct nbl_channel_mgt *chan_mgt,
+				      struct nbl_chan_info *chan_info)
+{
+	struct device *dma_dev = NBL_COMMON_TO_DMA_DEV(chan_mgt->common);
+	struct nbl_chan_ring *rxq = &chan_info->rxq;
+	struct nbl_chan_buf *buf;
+	u16 i;
+
+	for (i = 0; i < chan_info->num_rxq_entries; i++) {
+		buf = &rxq->buf[i];
+		dmam_free_coherent(dma_dev, chan_info->rxq_buf_size, buf->va,
+				   buf->pa);
+		buf->va = NULL;
+		buf->pa = 0;
+	}
+}
+
+static int nbl_chan_alloc_all_bufs(struct nbl_channel_mgt *chan_mgt,
+				   struct nbl_chan_info *chan_info)
+{
+	int err;
+
+	err = nbl_chan_alloc_all_tx_bufs(chan_mgt, chan_info);
+	if (err)
+		return err;
+
+	err = nbl_chan_alloc_all_rx_bufs(chan_mgt, chan_info);
+	if (err)
+		goto alloc_rx_bufs_err;
+
+	return 0;
+
+alloc_rx_bufs_err:
+	nbl_chan_free_all_tx_bufs(chan_mgt, chan_info);
+	return err;
+}
+
+static void nbl_chan_stop_queue(struct nbl_channel_mgt *chan_mgt,
+				struct nbl_chan_info *chan_info)
+{
+	struct nbl_hw_ops *hw_ops;
+
+	hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+	if (nbl_chan_is_adminq(chan_info)) {
+		hw_ops->stop_adminq_rxq(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt));
+		hw_ops->stop_adminq_txq(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt));
+	} else {
+		hw_ops->stop_mailbox_rxq(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt));
+		hw_ops->stop_mailbox_txq(NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt));
+	}
+}
+
+static void nbl_chan_free_all_bufs(struct nbl_channel_mgt *chan_mgt,
+				   struct nbl_chan_info *chan_info)
+{
+	nbl_chan_free_all_tx_bufs(chan_mgt, chan_info);
+	nbl_chan_free_all_rx_bufs(chan_mgt, chan_info);
+}
+
+static void nbl_chan_remove_queue(struct nbl_common_info *common,
+				  struct nbl_chan_info *chan_info)
+{
+	nbl_chan_remove_tx_queue(common, chan_info);
+	nbl_chan_remove_rx_queue(common, chan_info);
+}
+
+static int nbl_chan_teardown_queue(void *priv, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	struct nbl_common_info *common = chan_mgt->common;
+
+	nbl_chan_stop_queue(chan_mgt, chan_info);
+
+	nbl_chan_free_all_bufs(chan_mgt, chan_info);
+
+	nbl_chan_remove_queue(common, chan_info);
+
+	return 0;
+}
+
+static int nbl_chan_setup_queue(void *priv, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	int err;
+
+	nbl_chan_init_queue_param(chan_info, NBL_CHAN_QUEUE_LEN,
+				  NBL_CHAN_QUEUE_LEN, NBL_CHAN_BUF_LEN,
+				  NBL_CHAN_BUF_LEN);
+
+	err = nbl_chan_init_queue(common, chan_info);
+	if (err)
+		return err;
+
+	nbl_chan_config_queue(chan_mgt, chan_info, true); /* tx */
+	nbl_chan_config_queue(chan_mgt, chan_info, false); /* rx */
+
+	err = nbl_chan_alloc_all_bufs(chan_mgt, chan_info);
+	if (err)
+		goto chan_q_setup_fail;
+
+	return 0;
+
+chan_q_setup_fail:
+	nbl_chan_teardown_queue(chan_mgt, chan_type);
+	return err;
+}
+
+static void nbl_chan_shutdown_queue(struct nbl_channel_mgt *chan_mgt,
+				    u8 chan_type, bool tx)
+{
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+	struct nbl_hw_ops *hw_ops;
+
+	hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+	if (tx) {
+		if (nbl_chan_is_adminq(chan_info))
+			hw_ops->stop_adminq_txq(p);
+		else
+			hw_ops->stop_mailbox_txq(p);
+
+		nbl_chan_free_all_tx_bufs(chan_mgt, chan_info);
+		nbl_chan_remove_tx_queue(common, chan_info);
+	} else {
+		if (nbl_chan_is_adminq(chan_info))
+			hw_ops->stop_adminq_rxq(p);
+		else
+			hw_ops->stop_mailbox_rxq(p);
+
+		nbl_chan_free_all_rx_bufs(chan_mgt, chan_info);
+		nbl_chan_remove_rx_queue(common, chan_info);
+	}
+}
+
+static int nbl_chan_start_txq(struct nbl_channel_mgt *chan_mgt, u8 chan_type)
+{
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	int ret;
+
+	ret = nbl_chan_init_tx_queue(common, chan_info);
+	if (ret)
+		return ret;
+
+	nbl_chan_config_queue(chan_mgt, chan_info, true); /* tx */
+
+	ret = nbl_chan_alloc_all_tx_bufs(chan_mgt, chan_info);
+	if (ret)
+		goto alloc_buf_failed;
+
+	return 0;
+
+alloc_buf_failed:
+	nbl_chan_shutdown_queue(chan_mgt, chan_type, true);
+	return ret;
+}
+
+static int nbl_chan_start_rxq(struct nbl_channel_mgt *chan_mgt, u8 chan_type)
+{
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	int ret;
+
+	ret = nbl_chan_init_rx_queue(common, chan_info);
+	if (ret)
+		return ret;
+
+	nbl_chan_config_queue(chan_mgt, chan_info, false); /* rx */
+
+	ret = nbl_chan_alloc_all_rx_bufs(chan_mgt, chan_info);
+	if (ret)
+		goto alloc_buf_failed;
+
+	return 0;
+
+alloc_buf_failed:
+	nbl_chan_shutdown_queue(chan_mgt, chan_type, false);
+	return ret;
+}
+
+static int nbl_chan_reset_queue(struct nbl_channel_mgt *chan_mgt, u8 chan_type,
+				bool tx)
+{
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	int i = 0, j = 0, ret = 0;
+
+	/* If someone else is already resetting, don't bother */
+	if (test_bit(NBL_CHAN_RESETTING, chan_info->state))
+		return 0;
+
+	/* Make sure rx won't enter if we are resetting */
+	set_bit(NBL_CHAN_RESETTING, chan_info->state);
+	if (chan_info->clean_task)
+		nbl_common_flush_task(chan_info->clean_task);
+
+	/* Make sure tx won't enter if we are resetting */
+	spin_lock(&chan_info->txq_lock);
+
+	/* If we are in a race, and someone else has finished it, just return */
+	if (!test_bit(NBL_CHAN_RESETTING, chan_info->state)) {
+		spin_unlock(&chan_info->txq_lock);
+		return 0;
+	}
+
+	/* Make sure no one is waiting before we reset. */
+	while (i++ < (NBL_CHAN_ACK_WAIT_TIME * 2) / HZ) {
+		for (j = 0; j < NBL_CHAN_QUEUE_LEN; j++)
+			if (chan_info->wait[j].status == NBL_MBX_STATUS_WAITING)
+				break;
+
+		if (j == NBL_CHAN_QUEUE_LEN)
+			break;
+		mdelay(1000);
+	}
+
+	if (j != NBL_CHAN_QUEUE_LEN) {
+		nbl_warn(NBL_CHAN_MGT_TO_COMMON(chan_mgt),
+			 "Some wait_head unreleased, fail to reset\n");
+		clear_bit(NBL_CHAN_RESETTING, chan_info->state);
+		spin_unlock(&chan_info->txq_lock);
+		return 0;
+	}
+
+	nbl_chan_shutdown_queue(chan_mgt, chan_type, tx);
+
+	if (tx)
+		ret = nbl_chan_start_txq(chan_mgt, chan_type);
+	else
+		ret = nbl_chan_start_rxq(chan_mgt, chan_type);
+
+	/* Make sure we clear this bit inside the lock, so that we don't reset
+	 * twice if we race
+	 */
+	clear_bit(NBL_CHAN_RESETTING, chan_info->state);
+	spin_unlock(&chan_info->txq_lock);
+
+	return ret;
+}
+
+static bool nbl_chan_check_dma_err(struct nbl_channel_mgt *chan_mgt,
+				   u8 chan_type, bool tx)
+{
+	struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+	void *p = NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt);
+
+	if (hw_ops->get_hw_status(p))
+		return false;
+
+	if (chan_type == NBL_CHAN_TYPE_MAILBOX)
+		return hw_ops->check_mailbox_dma_err(p, tx);
+
+	return hw_ops->check_adminq_dma_err(p, tx);
+}
+
+static int nbl_chan_update_txqueue(struct nbl_channel_mgt *chan_mgt,
+				   struct nbl_chan_info *chan_info,
+				   struct nbl_chan_tx_param *param)
+{
+	struct nbl_chan_ring *txq = &chan_info->txq;
+	struct nbl_chan_tx_desc *tx_desc =
+		NBL_CHAN_TX_RING_TO_DESC(txq, txq->next_to_use);
+	struct nbl_chan_buf *tx_buf =
+		NBL_CHAN_TX_RING_TO_BUF(txq, txq->next_to_use);
+
+	if (param->arg_len > NBL_CHAN_BUF_LEN - sizeof(*tx_desc))
+		return -EINVAL;
+
+	tx_desc->dstid = param->dstid;
+	tx_desc->msg_type = param->msg_type;
+	tx_desc->msgid = param->msgid;
+
+	if (param->arg_len > NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN) {
+		memcpy(tx_buf->va, param->arg, param->arg_len);
+		tx_desc->buf_addr = tx_buf->pa;
+		tx_desc->buf_len = param->arg_len;
+		tx_desc->data_len = 0;
+	} else {
+		memcpy(tx_desc->data, param->arg, param->arg_len);
+		tx_desc->buf_len = 0;
+		tx_desc->data_len = param->arg_len;
+	}
+	tx_desc->flags = NBL_CHAN_TX_DESC_AVAIL;
+
+	/* Ensure the descriptor is fully written before it is made visible */
+	wmb();
+	txq->next_to_use =
+		NBL_NEXT_ID(txq->next_to_use, chan_info->num_txq_entries - 1);
+	txq->tail_ptr++;
+
+	return 0;
+}
+
+static int nbl_chan_kick_tx_ring(struct nbl_channel_mgt *chan_mgt,
+				 struct nbl_chan_info *chan_info)
+{
+	struct nbl_hw_ops *hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_chan_ring *txq = &chan_info->txq;
+	struct nbl_chan_tx_desc *tx_desc;
+	int i = 0;
+
+	/* Ensure descriptor writes complete before the tx doorbell */
+	mb();
+
+	NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, txq->tail_ptr,
+				  NBL_MB_TX_QID);
+
+	tx_desc = NBL_CHAN_TX_RING_TO_DESC(txq, txq->next_to_clean);
+
+	while (!(tx_desc->flags & NBL_CHAN_TX_DESC_USED)) {
+		udelay(NBL_CHAN_TX_WAIT_US);
+		i++;
+
+		if (!(i % NBL_CHAN_TX_REKICK_WAIT_TIMES))
+			NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt,
+						  txq->tail_ptr,
+						  NBL_MB_TX_QID);
+
+		if (i == NBL_CHAN_TX_WAIT_TIMES) {
+			nbl_err(common, "chan send message type: %d timeout\n",
+				tx_desc->msg_type);
+			return -EAGAIN;
+		}
+	}
+
+	txq->next_to_clean = txq->next_to_use;
+	return 0;
+}
+
+static void nbl_chan_recv_ack_msg(void *priv, u16 srcid, u16 msgid, void *data,
+				  u32 data_len)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_chan_info *chan_info = NULL;
+	struct nbl_chan_waitqueue_head *wait_head = NULL;
+	union nbl_chan_msg_id ack_msgid = { { 0 } };
+	u32 *payload = (u32 *)data;
+	u32 ack_datalen = 0, ack_msgtype = 0, copy_len = 0;
+
+	if (srcid == NBL_CHAN_ADMINQ_FUNCTION_ID)
+		chan_info = NBL_CHAN_MGT_TO_ADMINQ(chan_mgt);
+	else
+		chan_info = NBL_CHAN_MGT_TO_MBX(chan_mgt);
+
+	ack_datalen = data_len - 3 * sizeof(u32);
+	ack_msgtype = *payload;
+	ack_msgid.id = *(u16 *)(payload + 1);
+	wait_head = &chan_info->wait[ack_msgid.info.loc];
+	wait_head->ack_err = *(payload + 2);
+	chan_info->failed_cnt = 0;
+
+	if (wait_head->msg_type != ack_msgtype) {
+		nbl_warn(common,
+			 "Skip ack, msg type %d does not match msg type %d\n",
+			 ack_msgtype, wait_head->msg_type);
+		return;
+	}
+
+	if (wait_head->status != NBL_MBX_STATUS_WAITING) {
+		nbl_warn(common, "Skip ack with status %d\n",
+			 wait_head->status);
+		return;
+	}
+
+	if (wait_head->msg_index != ack_msgid.info.index) {
+		nbl_warn(common, "Skip ack, index %d does not match index %d\n",
+			 ack_msgid.info.index, wait_head->msg_index);
+		return;
+	}
+
+	if (ack_datalen != wait_head->ack_data_len)
+		nbl_debug(common,
+			  "Channel payload_len does not match ack_data_len, msgtype:%u, msgid:%u, rcv_data_len:%u, expect_data_len:%u\n",
+			  ack_msgtype, ack_msgid.id, ack_datalen,
+			  wait_head->ack_data_len);
+
+	copy_len = min_t(u32, wait_head->ack_data_len, ack_datalen);
+	if (wait_head->ack_err >= 0 && copy_len > 0)
+		memcpy((char *)wait_head->ack_data, payload + 3, copy_len);
+	wait_head->ack_data_len = (u16)copy_len;
+
+	/* Ensure ack data is written before acked is set */
+	wmb();
+	wait_head->acked = 1;
+	if (wait_head->need_waked)
+		wake_up(&wait_head->wait_queue);
+}
+
+static void nbl_chan_recv_msg(struct nbl_channel_mgt *chan_mgt, void *data,
+			      u32 data_len)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(chan_mgt->common);
+	struct nbl_chan_ack_info chan_ack;
+	struct nbl_chan_tx_desc *tx_desc;
+	struct nbl_chan_msg_node_data *msg_handler;
+	u16 msg_type, payload_len, srcid, msgid;
+	void *payload;
+
+	tx_desc = data;
+	msg_type = tx_desc->msg_type;
+	dev_dbg(dev, "recv msg_type: %d\n", tx_desc->msg_type);
+
+	srcid = tx_desc->srcid;
+	msgid = tx_desc->msgid;
+	if (msg_type >= NBL_CHAN_MSG_MAX)
+		goto send_warning;
+
+	if (tx_desc->data_len) {
+		payload = (void *)tx_desc->data;
+		payload_len = tx_desc->data_len;
+	} else {
+		payload = (void *)(tx_desc + 1);
+		payload_len = tx_desc->buf_len;
+	}
+
+	msg_handler = nbl_common_get_hash_node(chan_mgt->handle_hash_tbl,
+					       &msg_type);
+	if (msg_handler) {
+		msg_handler->func(msg_handler->priv, srcid, msgid, payload,
+				  payload_len);
+		return;
+	}
+
+send_warning:
+	NBL_CHAN_ACK(chan_ack, srcid, msg_type, msgid, -EPERM, NULL, 0);
+	nbl_chan_send_ack(chan_mgt, &chan_ack);
+	dev_warn(dev, "Recv channel msg_type: %d, but msg_handler is null!\n",
+		 msg_type);
+}
+
+static void nbl_chan_advance_rx_ring(struct nbl_channel_mgt *chan_mgt,
+				     struct nbl_chan_info *chan_info,
+				     struct nbl_chan_ring *rxq)
+{
+	struct nbl_chan_rx_desc *rx_desc;
+	struct nbl_hw_ops *hw_ops;
+	struct nbl_chan_buf *rx_buf;
+	u16 next_to_use;
+
+	hw_ops = NBL_CHAN_MGT_TO_HW_OPS(chan_mgt);
+
+	next_to_use = rxq->next_to_use;
+	rx_desc = NBL_CHAN_RX_RING_TO_DESC(rxq, next_to_use);
+	rx_buf = NBL_CHAN_RX_RING_TO_BUF(rxq, next_to_use);
+
+	rx_desc->flags = NBL_CHAN_RX_DESC_AVAIL;
+	rx_desc->buf_addr = rx_buf->pa;
+	rx_desc->buf_len = chan_info->rxq_buf_size;
+
+	/* Ensure the descriptor is written before the tail pointer update */
+	wmb();
+	rxq->next_to_use++;
+	if (rxq->next_to_use == chan_info->num_rxq_entries)
+		rxq->next_to_use = 0;
+	rxq->tail_ptr++;
+
+	NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, hw_ops, chan_mgt, rxq->tail_ptr,
+				  NBL_MB_RX_QID);
+}
+
+static void nbl_chan_clean_queue(struct nbl_channel_mgt *chan_mgt,
+				 struct nbl_chan_info *chan_info)
+{
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_chan_ring *rxq = &chan_info->rxq;
+	struct nbl_chan_rx_desc *rx_desc;
+	struct nbl_chan_buf *rx_buf;
+	u16 next_to_clean;
+
+	next_to_clean = rxq->next_to_clean;
+	rx_desc = NBL_CHAN_RX_RING_TO_DESC(rxq, next_to_clean);
+	rx_buf = NBL_CHAN_RX_RING_TO_BUF(rxq, next_to_clean);
+	while (rx_desc->flags & NBL_CHAN_RX_DESC_USED) {
+		if (!(rx_desc->flags & NBL_CHAN_RX_DESC_WRITE))
+			nbl_debug(common,
+				  "mailbox rx flag 0x%x has no NBL_CHAN_RX_DESC_WRITE\n",
+				  rx_desc->flags);
+
+		dma_rmb();
+		nbl_chan_recv_msg(chan_mgt, rx_buf->va, rx_desc->buf_len);
+
+		nbl_chan_advance_rx_ring(chan_mgt, chan_info, rxq);
+
+		next_to_clean++;
+		if (next_to_clean == chan_info->num_rxq_entries)
+			next_to_clean = 0;
+		rx_desc = NBL_CHAN_RX_RING_TO_DESC(rxq, next_to_clean);
+		rx_buf = NBL_CHAN_RX_RING_TO_BUF(rxq, next_to_clean);
+	}
+	rxq->next_to_clean = next_to_clean;
+}
+
+static void nbl_chan_clean_queue_subtask(void *priv, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+	if (!test_bit(NBL_CHAN_INTERRUPT_READY, chan_info->state) ||
+	    test_bit(NBL_CHAN_RESETTING, chan_info->state))
+		return;
+
+	nbl_chan_clean_queue(chan_mgt, chan_info);
+}
+
+static int nbl_chan_get_msg_id(struct nbl_chan_info *chan_info,
+			       union nbl_chan_msg_id *msgid)
+{
+	struct nbl_chan_waitqueue_head *wait = NULL;
+	int valid_loc = chan_info->wait_head_index, i;
+
+	for (i = 0; i < NBL_CHAN_QUEUE_LEN; i++) {
+		wait = &chan_info->wait[valid_loc];
+
+		if (wait->status != NBL_MBX_STATUS_WAITING) {
+			wait->msg_index = NBL_NEXT_ID(wait->msg_index,
+						      NBL_CHAN_MSG_INDEX_MAX);
+			msgid->info.index = wait->msg_index;
+			msgid->info.loc = valid_loc;
+
+			valid_loc = NBL_NEXT_ID(valid_loc,
+						chan_info->num_txq_entries - 1);
+			chan_info->wait_head_index = valid_loc;
+			return 0;
+		}
+
+		valid_loc =
+			NBL_NEXT_ID(valid_loc, chan_info->num_txq_entries - 1);
+	}
+
+	return -ENOSPC;
+}
+
+static int nbl_chan_send_msg(void *priv, struct nbl_chan_send_info *chan_send)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_GET_INFO(chan_mgt, chan_send->dstid);
+	struct nbl_chan_waitqueue_head *wait_head;
+	union nbl_chan_msg_id msgid = { { 0 } };
+	struct nbl_chan_tx_param tx_param = { 0 };
+	int i = NBL_CHAN_TX_WAIT_ACK_TIMES, resend_times = 0, ret = 0;
+	bool need_resend = true; /* need to resend when ack times out */
+
+	if (chan_send->arg_len >
+	    NBL_CHAN_BUF_LEN - sizeof(struct nbl_chan_tx_desc))
+		return -EINVAL;
+
+	if (test_bit(NBL_CHAN_ABNORMAL, chan_info->state))
+		return -EFAULT;
+
+	if (chan_info->failed_cnt >= NBL_CHANNEL_FREEZE_FAILED_CNT)
+		return -EFAULT;
+
+resend:
+	spin_lock(&chan_info->txq_lock);
+
+	ret = nbl_chan_get_msg_id(chan_info, &msgid);
+	if (ret) {
+		spin_unlock(&chan_info->txq_lock);
+		nbl_err(common,
+			"Channel tx wait head full, send msgtype:%u to dstid:%u failed\n",
+			chan_send->msg_type, chan_send->dstid);
+		return ret;
+	}
+
+	tx_param.msg_type = chan_send->msg_type;
+	tx_param.arg = chan_send->arg;
+	tx_param.arg_len = chan_send->arg_len;
+	tx_param.dstid = chan_send->dstid;
+	tx_param.msgid = msgid.id;
+
+	ret = nbl_chan_update_txqueue(chan_mgt, chan_info, &tx_param);
+	if (ret) {
+		spin_unlock(&chan_info->txq_lock);
+		nbl_err(common,
+			"Channel tx queue full, send msgtype:%u to dstid:%u failed\n",
+			chan_send->msg_type, chan_send->dstid);
+		return ret;
+	}
+
+	wait_head = &chan_info->wait[msgid.info.loc];
+	init_waitqueue_head(&wait_head->wait_queue);
+	wait_head->acked = 0;
+	wait_head->ack_data = chan_send->resp;
+	wait_head->ack_data_len = chan_send->resp_len;
+	wait_head->msg_type = chan_send->msg_type;
+	wait_head->need_waked = chan_send->ack;
+	wait_head->msg_index = msgid.info.index;
+	wait_head->status = chan_send->ack ? NBL_MBX_STATUS_WAITING :
+					     NBL_MBX_STATUS_IDLE;
+
+	ret = nbl_chan_kick_tx_ring(chan_mgt, chan_info);
+
+	spin_unlock(&chan_info->txq_lock);
+
+	if (ret) {
+		wait_head->status = NBL_MBX_STATUS_TIMEOUT;
+		goto check_tx_dma_err;
+	}
+
+	if (!chan_send->ack)
+		return 0;
+
+	if (chan_send->dstid != common->mgt_pf &&
+	    chan_send->msg_type != NBL_CHAN_MSG_KEEP_ALIVE)
+		need_resend = false;
+
+	if (test_bit(NBL_CHAN_INTERRUPT_READY, chan_info->state)) {
+		ret = wait_event_timeout(wait_head->wait_queue,
+					 wait_head->acked,
+					 NBL_CHAN_ACK_WAIT_TIME);
+		if (!ret) {
+			wait_head->status = NBL_MBX_STATUS_TIMEOUT;
+			if (!need_resend) {
+				chan_info->failed_cnt++;
+				return 0;
+			}
+			nbl_err(common,
+				"Channel waiting ack failed, message type: %d, msg id: %u\n",
+				chan_send->msg_type, msgid.id);
+			goto check_rx_dma_err;
+		}
+
+		/* Ensure ack data is read only after acked is observed */
+		rmb();
+		chan_send->ack_len = wait_head->ack_data_len;
+		wait_head->status = NBL_MBX_STATUS_IDLE;
+		chan_info->failed_cnt = 0;
+
+		return wait_head->ack_err;
+	}
+
+	/* Poll for the mailbox ack */
+	while (i--) {
+		nbl_chan_clean_queue(chan_mgt, chan_info);
+
+		if (wait_head->acked) {
+			chan_send->ack_len = wait_head->ack_data_len;
+			wait_head->status = NBL_MBX_STATUS_IDLE;
+			chan_info->failed_cnt = 0;
+			return wait_head->ack_err;
+		}
+		usleep_range(NBL_CHAN_TX_WAIT_ACK_US_MIN,
+			     NBL_CHAN_TX_WAIT_ACK_US_MAX);
+	}
+
+	wait_head->status = NBL_MBX_STATUS_TIMEOUT;
+	nbl_err(common,
+		"Channel polling ack failed, message type: %d msg id: %u\n",
+		chan_send->msg_type, msgid.id);
+
+check_rx_dma_err:
+	if (nbl_chan_check_dma_err(chan_mgt, chan_info->chan_type, false)) {
+		nbl_err(common, "nbl channel rx dma error\n");
+		nbl_chan_reset_queue(chan_mgt, chan_info->chan_type, false);
+		chan_info->rxq_reset_times++;
+	}
+
+check_tx_dma_err:
+	if (nbl_chan_check_dma_err(chan_mgt, chan_info->chan_type, true)) {
+		nbl_err(common, "nbl channel tx dma error\n");
+		nbl_chan_reset_queue(chan_mgt, chan_info->chan_type, true);
+		chan_info->txq_reset_times++;
+	}
+
+	if (++resend_times >= NBL_CHAN_RESEND_MAX_TIMES) {
+		nbl_err(common, "nbl channel resend_times %d\n", resend_times);
+		chan_info->failed_cnt++;
+
+		return -EFAULT;
+	}
+
+	i = NBL_CHAN_TX_WAIT_ACK_TIMES;
+	goto resend;
+}
+
+static int nbl_chan_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	u32 len = 3 * sizeof(u32) + chan_ack->data_len;
+	struct nbl_chan_send_info chan_send;
+	u32 *tmp;
+
+	tmp = kzalloc(len, GFP_ATOMIC);
+	if (!tmp)
+		return -ENOMEM;
+
+	tmp[0] = chan_ack->msg_type;
+	tmp[1] = chan_ack->msgid;
+	tmp[2] = (u32)chan_ack->err;
+	if (chan_ack->data && chan_ack->data_len)
+		memcpy(&tmp[3], chan_ack->data, chan_ack->data_len);
+
+	NBL_CHAN_SEND(chan_send, chan_ack->dstid, NBL_CHAN_MSG_ACK, tmp, len,
+		      NULL, 0, 0);
+	nbl_chan_send_msg(chan_mgt, &chan_send);
+	kfree(tmp);
+
+	return 0;
+}
+
+static void nbl_chan_unregister_msg(void *priv, u16 msg_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+	nbl_chan_delete_msg_handler(chan_mgt, msg_type);
+}
+
+static int nbl_chan_register_msg(void *priv, u16 msg_type, nbl_chan_resp func,
+				 void *callback_priv)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+	return nbl_chan_add_msg_handler(chan_mgt, msg_type, func,
+					callback_priv);
+}
+
+static bool nbl_chan_check_queue_exist(void *priv, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt;
+	struct nbl_chan_info *chan_info;
+
+	if (!priv)
+		return false;
+
+	chan_mgt = (struct nbl_channel_mgt *)priv;
+	chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+	return !!chan_info;
+}
+
+static void nbl_chan_keepalive_resp(void *priv, u16 srcid, u16 msgid,
+				    void *data, u32 data_len)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_ack_info chan_ack;
+
+	NBL_CHAN_ACK(chan_ack, srcid, NBL_CHAN_MSG_KEEP_ALIVE, msgid, 0, NULL,
+		     0);
+
+	nbl_chan_send_ack(chan_mgt, &chan_ack);
+}
+
+static void nbl_chan_keepalive(struct delayed_work *work)
+{
+	struct nbl_chan_keepalive_info *keepalive =
+		container_of(work, struct nbl_chan_keepalive_info,
+			     keepalive_task);
+	struct nbl_channel_mgt *chan_mgt =
+		(struct nbl_channel_mgt *)keepalive->chan_mgt;
+	struct nbl_chan_send_info chan_send;
+	u32 delay_time;
+
+	NBL_CHAN_SEND(chan_send, keepalive->keepalive_dest,
+		      NBL_CHAN_MSG_KEEP_ALIVE, NULL, 0, NULL, 0, 1);
+
+	if (nbl_chan_send_msg(chan_mgt, &chan_send)) {
+		if (keepalive->fail_cnt <
+		    NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_THRESH)
+			keepalive->fail_cnt++;
+
+		if (keepalive->fail_cnt >=
+		    NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_THRESH &&
+		    keepalive->timeout < NBL_CHAN_KEEPALIVE_MAX_TIMEOUT) {
+			get_random_bytes(&delay_time, sizeof(delay_time));
+			keepalive->timeout +=
+				delay_time %
+				NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_GAP;
+
+			keepalive->fail_cnt = 0;
+		}
+	} else {
+		if (keepalive->success_cnt <
+		    NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_THRESH)
+			keepalive->success_cnt++;
+
+		if (keepalive->success_cnt >=
+		    NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_THRESH &&
+		    keepalive->timeout >
+		    NBL_CHAN_KEEPALIVE_DEFAULT_TIMEOUT * 2) {
+			get_random_bytes(&delay_time, sizeof(delay_time));
+			keepalive->timeout -=
+				delay_time %
+				NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_GAP;
+
+			keepalive->success_cnt = 0;
+		}
+	}
+
+	nbl_common_q_dwork_keepalive(work,
+				     jiffies_to_msecs(keepalive->timeout));
+}
+
+static int nbl_chan_setup_keepalive(void *priv, u16 dest_id, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+	struct nbl_chan_keepalive_info *keepalive = &chan_info->keepalive;
+	u32 delay_time;
+
+	get_random_bytes(&delay_time, sizeof(delay_time));
+	delay_time = delay_time % NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_GAP;
+
+	keepalive->timeout = NBL_CHAN_KEEPALIVE_DEFAULT_TIMEOUT + delay_time;
+	keepalive->chan_mgt = chan_mgt;
+	keepalive->keepalive_dest = dest_id;
+	keepalive->success_cnt = 0;
+	keepalive->fail_cnt = 0;
+
+	nbl_chan_add_msg_handler(chan_mgt, NBL_CHAN_MSG_KEEP_ALIVE,
+				 nbl_chan_keepalive_resp, chan_mgt);
+
+	nbl_common_alloc_delayed_task(&keepalive->keepalive_task,
+				      nbl_chan_keepalive);
+	keepalive->task_setuped = true;
+
+	nbl_common_q_dwork_keepalive(&keepalive->keepalive_task,
+				     jiffies_to_msecs(keepalive->timeout));
+
+	return 0;
+}
+
+static void nbl_chan_remove_keepalive(void *priv, u8 chan_type)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+	if (!chan_info->keepalive.task_setuped)
+		return;
+
+	nbl_common_release_delayed_task(&chan_info->keepalive.keepalive_task);
+	chan_info->keepalive.task_setuped = false;
+}
+
+static void nbl_chan_register_chan_task(void *priv, u8 chan_type,
+					struct work_struct *task)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+	chan_info->clean_task = task;
+}
+
+static void nbl_chan_set_queue_state(void *priv, enum nbl_chan_state state,
+				     u8 chan_type, u8 set)
+{
+	struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+	struct nbl_chan_info *chan_info =
+		NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type);
+
+	if (set)
+		set_bit(state, chan_info->state);
+	else
+		clear_bit(state, chan_info->state);
+}
+
+static struct nbl_channel_ops chan_ops = {
+	.send_msg = nbl_chan_send_msg,
+	.send_ack = nbl_chan_send_ack,
+	.register_msg = nbl_chan_register_msg,
+	.unregister_msg = nbl_chan_unregister_msg,
+	.cfg_chan_qinfo_map_table = nbl_chan_cfg_qinfo_map_table,
+	.check_queue_exist = nbl_chan_check_queue_exist,
+	.setup_queue = nbl_chan_setup_queue,
+	.teardown_queue = nbl_chan_teardown_queue,
+	.clean_queue_subtask = nbl_chan_clean_queue_subtask,
+
+	.setup_keepalive = nbl_chan_setup_keepalive,
+	.remove_keepalive = nbl_chan_remove_keepalive,
+	.register_chan_task = nbl_chan_register_chan_task,
+	.set_queue_state = nbl_chan_set_queue_state,
+};
+
+static int
+nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
+			struct nbl_init_param *param,
+			struct nbl_channel_mgt_leonis **chan_mgt_leonis)
+{
+	struct nbl_common_info *common;
+	struct nbl_hw_ops_tbl *hw_ops_tbl;
+	struct nbl_chan_info *mailbox;
+	struct nbl_chan_info *adminq = NULL;
+	struct device *dev;
+	int ret;
+
+	dev = NBL_ADAP_TO_DEV(adapter);
+	common = NBL_ADAP_TO_COMMON(adapter);
+	hw_ops_tbl = NBL_ADAP_TO_HW_OPS_TBL(adapter);
+
+	*chan_mgt_leonis = devm_kzalloc(dev,
+					sizeof(struct nbl_channel_mgt_leonis),
+					GFP_KERNEL);
+	if (!*chan_mgt_leonis)
+		goto alloc_channel_mgt_leonis_fail;
+
+	NBL_CHAN_MGT_TO_COMMON(&(*chan_mgt_leonis)->chan_mgt) = common;
+	(*chan_mgt_leonis)->chan_mgt.hw_ops_tbl = hw_ops_tbl;
+
+	mailbox = devm_kzalloc(dev, sizeof(struct nbl_chan_info), GFP_KERNEL);
+	if (!mailbox)
+		goto alloc_mailbox_fail;
+	mailbox->chan_type = NBL_CHAN_TYPE_MAILBOX;
+	NBL_CHAN_MGT_TO_MBX(&(*chan_mgt_leonis)->chan_mgt) = mailbox;
+
+	if (param->caps.has_ctrl) {
+		adminq = devm_kzalloc(dev, sizeof(struct nbl_chan_info),
+				      GFP_KERNEL);
+		if (!adminq)
+			goto alloc_adminq_fail;
+		adminq->chan_type = NBL_CHAN_TYPE_ADMINQ;
+		NBL_CHAN_MGT_TO_ADMINQ(&(*chan_mgt_leonis)->chan_mgt) = adminq;
+	}
+
+	ret = nbl_chan_init_msg_handler(&(*chan_mgt_leonis)->chan_mgt);
+	if (ret)
+		goto init_chan_msg_handle;
+
+	return 0;
+
+init_chan_msg_handle:
+	if (adminq)
+		devm_kfree(dev, adminq);
+alloc_adminq_fail:
+	devm_kfree(dev, mailbox);
+alloc_mailbox_fail:
+	devm_kfree(dev, *chan_mgt_leonis);
+	*chan_mgt_leonis = NULL;
+alloc_channel_mgt_leonis_fail:
+	return -ENOMEM;
+}
+
+static void
+nbl_chan_remove_chan_mgt(struct nbl_common_info *common,
+			 struct nbl_channel_mgt_leonis **chan_mgt)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+
+	nbl_chan_remove_msg_handler(&(*chan_mgt)->chan_mgt);
+	if (NBL_CHAN_MGT_TO_ADMINQ(&(*chan_mgt)->chan_mgt))
+		devm_kfree(dev,
+			   NBL_CHAN_MGT_TO_ADMINQ(&(*chan_mgt)->chan_mgt));
+	devm_kfree(dev, NBL_CHAN_MGT_TO_MBX(&(*chan_mgt)->chan_mgt));
+
+	/* check and remove command queue */
+	devm_kfree(dev, *chan_mgt);
+	*chan_mgt = NULL;
+}
+
+static void nbl_chan_remove_ops(struct device *dev,
+				struct nbl_channel_ops_tbl **chan_ops_tbl)
+{
+	if (!dev || !chan_ops_tbl)
+		return;
+
+	devm_kfree(dev, *chan_ops_tbl);
+	*chan_ops_tbl = NULL;
+}
+
+static int nbl_chan_setup_ops(struct device *dev,
+			      struct nbl_channel_ops_tbl **chan_ops_tbl,
+			      struct nbl_channel_mgt_leonis *chan_mgt)
+{
+	int ret;
+
+	*chan_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_channel_ops_tbl),
+				     GFP_KERNEL);
+	if (!*chan_ops_tbl)
+		return -ENOMEM;
+
+	NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &chan_ops;
+	NBL_CHAN_OPS_TBL_TO_PRIV(*chan_ops_tbl) = chan_mgt;
+
+	if (!chan_mgt)
+		return 0;
+
+	ret = nbl_chan_add_msg_handler(&chan_mgt->chan_mgt, NBL_CHAN_MSG_ACK,
+				       nbl_chan_recv_ack_msg, chan_mgt);
+	if (ret)
+		goto err;
+
+	return 0;
+
+err:
+	devm_kfree(dev, *chan_ops_tbl);
+	*chan_ops_tbl = NULL;
+
+	return ret;
+}
+
+int nbl_chan_init_common(void *p, struct nbl_init_param *param)
+{
+	struct nbl_adapter *adap = (struct nbl_adapter *)p;
+	struct nbl_channel_mgt_leonis **chan_mgt_leonis;
+	struct nbl_channel_ops_tbl **chan_ops_tbl;
+	struct nbl_common_info *common;
+	struct device *dev;
+	int ret = 0;
+
+	dev = NBL_ADAP_TO_DEV(adap);
+	common = NBL_ADAP_TO_COMMON(adap);
+	chan_mgt_leonis =
+		(struct nbl_channel_mgt_leonis **)&NBL_ADAP_TO_CHAN_MGT(adap);
+	chan_ops_tbl = &NBL_ADAP_TO_CHAN_OPS_TBL(adap);
+
+	ret = nbl_chan_setup_chan_mgt(adap, param, chan_mgt_leonis);
+	if (ret)
goto setup_mgt_fail; + + ret = nbl_chan_setup_ops(dev, chan_ops_tbl, *chan_mgt_leonis); + if (ret) + goto setup_ops_fail; + + return 0; + +setup_ops_fail: + nbl_chan_remove_chan_mgt(common, chan_mgt_leonis); +setup_mgt_fail: + return ret; +} + +void nbl_chan_remove_common(void *p) +{ + struct nbl_adapter *adap = (struct nbl_adapter *)p; + struct nbl_channel_mgt_leonis **chan_mgt_leonis; + struct nbl_channel_ops_tbl **chan_ops_tbl; + struct nbl_common_info *common; + struct device *dev; + + dev = NBL_ADAP_TO_DEV(adap); + common = NBL_ADAP_TO_COMMON(adap); + chan_mgt_leonis = + (struct nbl_channel_mgt_leonis **)&NBL_ADAP_TO_CHAN_MGT(adap); + chan_ops_tbl = &NBL_ADAP_TO_CHAN_OPS_TBL(adap); + + nbl_chan_remove_chan_mgt(common, chan_mgt_leonis); + nbl_chan_remove_ops(dev, chan_ops_tbl); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h new file mode 100644 index 000000000000..2d5c23b80f1d --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_channel/nbl_channel.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_CHANNEL_H_ +#define _NBL_CHANNEL_H_ + +#include "nbl_core.h" +#define NBL_CHAN_MGT_TO_COMMON(chan_mgt) ((chan_mgt)->common) +#define NBL_CHAN_MGT_TO_DEV(chan_mgt) \ + NBL_COMMON_TO_DEV(NBL_CHAN_MGT_TO_COMMON(chan_mgt)) +#define NBL_CHAN_MGT_TO_HW_OPS_TBL(chan_mgt) ((chan_mgt)->hw_ops_tbl) +#define NBL_CHAN_MGT_TO_HW_OPS(chan_mgt) \ + (NBL_CHAN_MGT_TO_HW_OPS_TBL(chan_mgt)->ops) +#define NBL_CHAN_MGT_TO_HW_PRIV(chan_mgt) \ + (NBL_CHAN_MGT_TO_HW_OPS_TBL(chan_mgt)->priv) +#define NBL_CHAN_MGT_TO_MBX(chan_mgt) \ + ((chan_mgt)->chan_info[NBL_CHAN_TYPE_MAILBOX]) +#define NBL_CHAN_MGT_TO_ADMINQ(chan_mgt) \ + ((chan_mgt)->chan_info[NBL_CHAN_TYPE_ADMINQ]) +#define NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt, chan_type) \ + ((chan_mgt)->chan_info[chan_type]) + +#define NBL_CHAN_TX_RING_TO_DESC(tx_ring, i) \ + (&(((struct nbl_chan_tx_desc *)((tx_ring)->desc))[i])) +#define NBL_CHAN_RX_RING_TO_DESC(rx_ring, i) \ + (&(((struct nbl_chan_rx_desc *)((rx_ring)->desc))[i])) +#define NBL_CHAN_TX_RING_TO_BUF(tx_ring, i) (&(((tx_ring)->buf)[i])) +#define NBL_CHAN_RX_RING_TO_BUF(rx_ring, i) (&(((rx_ring)->buf)[i])) + +#define NBL_CHAN_GET_INFO(chan_mgt, id) \ +({ \ + typeof(chan_mgt) _chan_mgt = (chan_mgt); \ + ((id) == NBL_CHAN_ADMINQ_FUNCTION_ID && \ + NBL_CHAN_MGT_TO_ADMINQ(_chan_mgt) ? 
\ + NBL_CHAN_MGT_TO_ADMINQ(_chan_mgt) : \ + NBL_CHAN_MGT_TO_MBX(_chan_mgt)); \ + }) + +#define NBL_CHAN_TX_WAIT_US 100 +#define NBL_CHAN_TX_REKICK_WAIT_TIMES 2000 +#define NBL_CHAN_TX_WAIT_TIMES 30000 + +#define NBL_CHAN_TX_WAIT_ACK_US_MIN 100 +#define NBL_CHAN_TX_WAIT_ACK_US_MAX 120 +#define NBL_CHAN_TX_WAIT_ACK_TIMES 50000 + +#define NBL_CHAN_QUEUE_LEN 256 +#define NBL_CHAN_BUF_LEN 4096 + +#define NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN 16 +#define NBL_CHAN_RESEND_MAX_TIMES 3 + +#define NBL_CHAN_TX_DESC_AVAIL BIT(0) +#define NBL_CHAN_TX_DESC_USED BIT(1) +#define NBL_CHAN_RX_DESC_WRITE BIT(1) +#define NBL_CHAN_RX_DESC_AVAIL BIT(3) +#define NBL_CHAN_RX_DESC_USED BIT(4) + +#define NBL_CHAN_ACK_WAIT_TIME (3 * HZ) + +#define NBL_CHAN_HANDLER_TBL_BUCKET_SIZE 512 + +enum { + NBL_MB_RX_QID = 0, + NBL_MB_TX_QID = 1, +}; + +enum { + NBL_MBX_STATUS_IDLE = 0, + NBL_MBX_STATUS_WAITING, + NBL_MBX_STATUS_TIMEOUT = -1, +}; + +struct nbl_chan_tx_param { + enum nbl_chan_msg_type msg_type; + void *arg; + size_t arg_len; + u16 dstid; + u16 msgid; +}; + +struct nbl_chan_buf { + void *va; + dma_addr_t pa; + size_t size; +}; + +struct nbl_chan_tx_desc { + u16 flags; + u16 srcid; + u16 dstid; + u16 data_len; + u16 buf_len; + u64 buf_addr; + u16 msg_type; + u8 data[16]; + u16 msgid; + u8 rsv[26]; +} __packed; + +struct nbl_chan_rx_desc { + u16 flags; + u32 buf_len; + u16 buf_id; + u64 buf_addr; +} __packed; + +struct nbl_chan_ring { + void *desc; + struct nbl_chan_buf *buf; + u16 next_to_use; + u16 tail_ptr; + u16 next_to_clean; + dma_addr_t dma; +}; + +#define NBL_CHAN_MSG_INDEX_MAX 63 + +union nbl_chan_msg_id { + struct nbl_chan_msg_id_info { + u16 index : 6; + u16 loc : 10; + } info; + u16 id; +}; + +struct nbl_chan_waitqueue_head { + struct wait_queue_head wait_queue; + char *ack_data; + int acked; + int ack_err; + u16 ack_data_len; + u16 need_waked; + u16 msg_type; + u8 status; + u8 msg_index; +}; + +#define NBL_CHAN_KEEPALIVE_DEFAULT_TIMEOUT (10 * HZ) +#define 
NBL_CHAN_KEEPALIVE_MAX_TIMEOUT (1024 * HZ) +#define NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_GAP (10 * HZ) +#define NBL_CHAN_KEEPALIVE_TIMEOUT_UPDATE_THRESH (3) + +struct nbl_chan_keepalive_info { + struct delayed_work keepalive_task; + void *chan_mgt; + u32 timeout; + u16 keepalive_dest; + u8 success_cnt; + u8 fail_cnt; + bool task_setuped; + u8 resv[3]; +}; + +struct nbl_chan_info { + struct nbl_chan_ring txq; + struct nbl_chan_ring rxq; + struct nbl_chan_waitqueue_head *wait; + /* spinlock_t */ + spinlock_t txq_lock; + + struct work_struct *clean_task; + struct nbl_chan_keepalive_info keepalive; + + u16 wait_head_index; + u16 num_txq_entries; + u16 num_rxq_entries; + u16 txq_buf_size; + u16 rxq_buf_size; + + u16 txq_reset_times; + u16 rxq_reset_times; + + DECLARE_BITMAP(state, NBL_CHAN_STATE_NBITS); + + u8 chan_type; + /* three consecutive fails will freeze the queue */ + u8 failed_cnt; +}; + +struct nbl_chan_msg_node_data { + nbl_chan_resp func; + void *priv; +}; + +struct nbl_channel_mgt { + struct nbl_common_info *common; + struct nbl_hw_ops_tbl *hw_ops_tbl; + struct nbl_chan_info *chan_info[NBL_CHAN_TYPE_MAX]; + struct nbl_cmdq_mgt *cmdq_mgt; + void *handle_hash_tbl; +}; + +/* Mgt structure for each product. + * Every indivisual mgt must have the common mgt as its first member, and + * contains its unique data structure in the reset of it. + */ +struct nbl_channel_mgt_leonis { + struct nbl_channel_mgt chan_mgt; +}; + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c new file mode 100644 index 000000000000..fe18a439b5d8 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.c @@ -0,0 +1,784 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#include "nbl_common.h" + +struct nbl_common_wq_mgt { + struct workqueue_struct *ctrl_dev_wq; + struct workqueue_struct *net_dev_wq; + struct workqueue_struct *keepalive_wq; +}; + +void nbl_convert_mac(u8 *mac, u8 *reverse_mac) +{ + int i; + + for (i = 0; i < ETH_ALEN; i++) + reverse_mac[i] = mac[ETH_ALEN - 1 - i]; +} + +static struct nbl_common_wq_mgt *wq_mgt; + +void nbl_common_queue_work(struct work_struct *task, bool ctrl_task) +{ + if (ctrl_task) + queue_work(wq_mgt->ctrl_dev_wq, task); + else + queue_work(wq_mgt->net_dev_wq, task); +} + +void nbl_common_q_dwork(struct delayed_work *task, u32 msec, bool ctrl_task) +{ + if (ctrl_task) + queue_delayed_work(wq_mgt->ctrl_dev_wq, task, + msecs_to_jiffies(msec)); + else + queue_delayed_work(wq_mgt->net_dev_wq, task, + msecs_to_jiffies(msec)); +} + +void nbl_common_q_dwork_keepalive(struct delayed_work *task, u32 msec) +{ + queue_delayed_work(wq_mgt->keepalive_wq, task, msecs_to_jiffies(msec)); +} + +void nbl_common_release_task(struct work_struct *task) +{ + cancel_work_sync(task); +} + +void nbl_common_alloc_task(struct work_struct *task, void *func) +{ + INIT_WORK(task, func); +} + +void nbl_common_release_delayed_task(struct delayed_work *task) +{ + cancel_delayed_work_sync(task); +} + +void nbl_common_alloc_delayed_task(struct delayed_work *task, void *func) +{ + INIT_DELAYED_WORK(task, func); +} + +void nbl_common_flush_task(struct work_struct *task) +{ + flush_work(task); +} + +void nbl_common_destroy_wq(void) +{ + destroy_workqueue(wq_mgt->keepalive_wq); + destroy_workqueue(wq_mgt->net_dev_wq); + destroy_workqueue(wq_mgt->ctrl_dev_wq); + kfree(wq_mgt); +} + +int nbl_common_create_wq(void) +{ + wq_mgt = kzalloc(sizeof(*wq_mgt), GFP_KERNEL); + if (!wq_mgt) + return -ENOMEM; + + wq_mgt->ctrl_dev_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_UNBOUND, + 0, "nbl_ctrldev_wq"); + if (!wq_mgt->ctrl_dev_wq) { + pr_err("Failed to create workqueue nbl_ctrldev_wq\n"); + goto alloc_ctrl_dev_wq_failed; 
+ } + + wq_mgt->net_dev_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_UNBOUND, + 0, "nbl_net_dev_wq"); + if (!wq_mgt->net_dev_wq) { + pr_err("Failed to create workqueue nbl_net_dev_wq\n"); + goto alloc_net_dev_wq_failed; + } + + wq_mgt->keepalive_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_UNBOUND, + 0, "nbl_keepalive_wq"); + if (!wq_mgt->keepalive_wq) { + pr_err("Failed to create workqueue nbl_keepalive_wq\n"); + goto alloc_keepalive_wq_failed; + } + + return 0; + +alloc_keepalive_wq_failed: + destroy_workqueue(wq_mgt->net_dev_wq); +alloc_net_dev_wq_failed: + destroy_workqueue(wq_mgt->ctrl_dev_wq); +alloc_ctrl_dev_wq_failed: + kfree(wq_mgt); + return -ENOMEM; +} + +u32 nbl_common_pf_id_subtraction_mgtpf_id(struct nbl_common_info *common, + u32 pf_id) +{ + u32 diff = U32_MAX; + + if (pf_id >= NBL_COMMON_TO_MGT_PF(common)) + diff = pf_id - NBL_COMMON_TO_MGT_PF(common); + + return diff; +} + +static u32 nbl_common_calc_hash_key(void *key, u32 key_size, u32 bucket_size) +{ + u32 hash_val; + u32 value = 0; + u32 i; + + /* if bucket size little than 1, the hash value always 0 */ + if (bucket_size == NBL_HASH_TBL_LIST_BUCKET_SIZE) + return 0; + + for (i = 0; i < key_size; i++) + value += *((u8 *)key + i); + + hash_val = __hash_32(value); + + return hash_val % bucket_size; +} + +int nbl_common_find_free_idx(unsigned long *addr, u32 size, u32 idx_num, + u32 multiple) +{ + u32 idx_num_tmp; + u32 first_idx; + u32 next_idx; + u32 cur_idx; + + first_idx = find_first_zero_bit(addr, size); + /* most find a index */ + if (idx_num == 1) + return first_idx; + + while (first_idx < size) { + if (first_idx % multiple == 0) { + idx_num_tmp = idx_num - 1; + cur_idx = first_idx; + while (cur_idx < size && idx_num_tmp > 0) { + next_idx = find_next_zero_bit(addr, size, + cur_idx + 1); + if (next_idx - cur_idx != 1) + break; + idx_num_tmp--; + cur_idx = next_idx; + } + + /* has reach tail, return err */ + if (cur_idx >= size) + return size; + + /* has find available idx, return the 
begin idx */ + if (!idx_num_tmp) + return first_idx; + + first_idx = first_idx + multiple; + } else { + first_idx = first_idx + 1; + } + + first_idx = find_next_zero_bit(addr, size, first_idx); + } + + return size; +} + +/* + * alloc a hash table + * the table support multi thread + */ +void *nbl_common_init_hash_table(struct nbl_hash_tbl_key *key) +{ + struct nbl_hash_tbl_mgt *tbl_mgt; + int bucket_size; + int i; + + tbl_mgt = devm_kzalloc(key->dev, sizeof(struct nbl_hash_tbl_mgt), + GFP_KERNEL); + if (!tbl_mgt) + return NULL; + + bucket_size = key->bucket_size; + tbl_mgt->hash = devm_kcalloc(key->dev, bucket_size, + sizeof(struct hlist_head), GFP_KERNEL); + if (!tbl_mgt->hash) + goto alloc_hash_failed; + + for (i = 0; i < bucket_size; i++) + INIT_HLIST_HEAD(tbl_mgt->hash + i); + + memcpy(&tbl_mgt->tbl_key, key, sizeof(struct nbl_hash_tbl_key)); + + if (key->lock_need) + mutex_init(&tbl_mgt->lock); + + return tbl_mgt; + +alloc_hash_failed: + devm_kfree(key->dev, tbl_mgt); + + return NULL; +} + +/* + * alloc a hash node, and add to hlist_head + */ +int nbl_common_alloc_hash_node(void *priv, void *key, void *data, + void **out_data) +{ + struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv; + struct nbl_hash_entry_node *hash_node; + u32 hash_val; + u16 key_size; + u16 data_size; + + hash_node = devm_kzalloc(tbl_mgt->tbl_key.dev, + sizeof(struct nbl_hash_entry_node), + GFP_KERNEL); + if (!hash_node) + return -1; + + key_size = tbl_mgt->tbl_key.key_size; + hash_node->key = + devm_kzalloc(tbl_mgt->tbl_key.dev, key_size, GFP_KERNEL); + if (!hash_node->key) + goto alloc_key_failed; + + data_size = tbl_mgt->tbl_key.data_size; + hash_node->data = + devm_kzalloc(tbl_mgt->tbl_key.dev, data_size, GFP_KERNEL); + if (!hash_node->data) + goto alloc_data_failed; + + memcpy(hash_node->key, key, key_size); + memcpy(hash_node->data, data, data_size); + + hash_val = nbl_common_calc_hash_key(key, key_size, + tbl_mgt->tbl_key.bucket_size); + + if 
(tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + hlist_add_head(&hash_node->node, tbl_mgt->hash + hash_val); + tbl_mgt->node_num++; + if (out_data) + *out_data = hash_node->data; + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); + + return 0; + +alloc_data_failed: + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->key); +alloc_key_failed: + devm_kfree(tbl_mgt->tbl_key.dev, hash_node); + + return -1; +} + +/* + * get a hash node, return the data if node exist + */ +void *nbl_common_get_hash_node(void *priv, void *key) +{ + struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv; + struct nbl_hash_entry_node *hash_node; + struct hlist_head *head; + void *data = NULL; + u32 hash_val; + u16 key_size; + + key_size = tbl_mgt->tbl_key.key_size; + hash_val = nbl_common_calc_hash_key(key, key_size, + tbl_mgt->tbl_key.bucket_size); + head = tbl_mgt->hash + hash_val; + + if (tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + hlist_for_each_entry(hash_node, head, node) + if (!memcmp(hash_node->key, key, key_size)) { + data = hash_node->data; + break; + } + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); + + return data; +} + +static void nbl_common_remove_hash_node(struct nbl_hash_tbl_mgt *tbl_mgt, + struct nbl_hash_entry_node *hash_node) +{ + hlist_del(&hash_node->node); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->key); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->data); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node); + tbl_mgt->node_num--; +} + +/* + * free a hash node + */ +void nbl_common_free_hash_node(void *priv, void *key) +{ + struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv; + struct nbl_hash_entry_node *hash_node; + struct hlist_head *head; + u32 hash_val; + u16 key_size; + + key_size = tbl_mgt->tbl_key.key_size; + hash_val = nbl_common_calc_hash_key(key, key_size, + tbl_mgt->tbl_key.bucket_size); + head = tbl_mgt->hash + hash_val; + + if (tbl_mgt->tbl_key.lock_need) + 
mutex_lock(&tbl_mgt->lock); + + hlist_for_each_entry(hash_node, head, node) + if (!memcmp(hash_node->key, key, key_size)) + break; + + if (hash_node) + nbl_common_remove_hash_node(tbl_mgt, hash_node); + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); +} + +void nbl_common_remove_hash_table(void *priv, struct nbl_hash_tbl_del_key *key) +{ + struct nbl_hash_tbl_mgt *tbl_mgt = (struct nbl_hash_tbl_mgt *)priv; + struct nbl_hash_entry_node *hash_node; + struct hlist_node *safe_node; + struct hlist_head *head; + struct device *dev; + u32 i; + + if (!priv) + return; + + if (tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + for (i = 0; i < tbl_mgt->tbl_key.bucket_size; i++) { + head = tbl_mgt->hash + i; + hlist_for_each_entry_safe(hash_node, safe_node, head, node) { + if (key && key->action_func) + key->action_func(key->action_priv, + hash_node->key, + hash_node->data); + nbl_common_remove_hash_node(tbl_mgt, hash_node); + } + } + + devm_kfree(tbl_mgt->tbl_key.dev, tbl_mgt->hash); + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); + + dev = tbl_mgt->tbl_key.dev; + devm_kfree(dev, tbl_mgt); +} + +/* + * alloc a hash x and y axis table + * it support x/y axis store if necessary, so it can scan by x/y axis; + * the table support multi thread + */ +void *nbl_common_init_hash_xy_table(struct nbl_hash_xy_tbl_key *key) +{ + struct nbl_hash_xy_tbl_mgt *tbl_mgt; + int i; + + tbl_mgt = devm_kzalloc(key->dev, sizeof(struct nbl_hash_xy_tbl_mgt), + GFP_KERNEL); + if (!tbl_mgt) + return NULL; + + tbl_mgt->hash = devm_kcalloc(key->dev, key->bucket_size, + sizeof(struct hlist_head), GFP_KERNEL); + if (!tbl_mgt->hash) + goto alloc_hash_failed; + + tbl_mgt->x_axis_hash = devm_kcalloc(key->dev, key->x_bucket_size, + sizeof(struct hlist_head), + GFP_KERNEL); + if (!tbl_mgt->x_axis_hash) + goto alloc_x_axis_hash_failed; + + tbl_mgt->y_axis_hash = devm_kcalloc(key->dev, key->y_bucket_size, + sizeof(struct hlist_head), + GFP_KERNEL); + if 
(!tbl_mgt->y_axis_hash) + goto alloc_y_axis_hash_failed; + + for (i = 0; i < key->bucket_size; i++) + INIT_HLIST_HEAD(tbl_mgt->hash + i); + + for (i = 0; i < key->x_bucket_size; i++) + INIT_HLIST_HEAD(tbl_mgt->x_axis_hash + i); + + for (i = 0; i < key->y_bucket_size; i++) + INIT_HLIST_HEAD(tbl_mgt->y_axis_hash + i); + + memcpy(&tbl_mgt->tbl_key, key, sizeof(struct nbl_hash_xy_tbl_key)); + + if (key->lock_need) + mutex_init(&tbl_mgt->lock); + + return tbl_mgt; + +alloc_y_axis_hash_failed: + devm_kfree(key->dev, tbl_mgt->x_axis_hash); +alloc_x_axis_hash_failed: + devm_kfree(key->dev, tbl_mgt->hash); +alloc_hash_failed: + devm_kfree(key->dev, tbl_mgt); + + return NULL; +} + +/* + * alloc a hash x and y node, and add to hlist_head + */ +int nbl_common_alloc_hash_xy_node(void *priv, void *x_key, void *y_key, + void *data) +{ + struct nbl_hash_xy_tbl_mgt *tbl_mgt = + (struct nbl_hash_xy_tbl_mgt *)priv; + struct nbl_hash_entry_xy_node *hash_node; + void *key; + + u32 hash_val, x_hash_val, y_hash_val; + + u16 key_size, x_key_size, y_key_size, data_size; + + hash_node = devm_kzalloc(tbl_mgt->tbl_key.dev, + sizeof(struct nbl_hash_entry_xy_node), + GFP_KERNEL); + if (!hash_node) + return -1; + + x_key_size = tbl_mgt->tbl_key.x_key_size; + hash_node->x_axis_key = + devm_kzalloc(tbl_mgt->tbl_key.dev, x_key_size, GFP_KERNEL); + if (!hash_node->x_axis_key) + goto alloc_x_key_failed; + + y_key_size = tbl_mgt->tbl_key.y_key_size; + hash_node->y_axis_key = + devm_kzalloc(tbl_mgt->tbl_key.dev, y_key_size, GFP_KERNEL); + if (!hash_node->y_axis_key) + goto alloc_y_key_failed; + + key_size = x_key_size + y_key_size; + key = devm_kzalloc(tbl_mgt->tbl_key.dev, key_size, GFP_KERNEL); + if (!key) + goto alloc_key_failed; + + data_size = tbl_mgt->tbl_key.data_size; + hash_node->data = + devm_kzalloc(tbl_mgt->tbl_key.dev, data_size, GFP_KERNEL); + if (!hash_node->data) + goto alloc_data_failed; + + memcpy(key, x_key, x_key_size); + memcpy(key + x_key_size, y_key, y_key_size); + 
memcpy(hash_node->x_axis_key, x_key, x_key_size); + memcpy(hash_node->y_axis_key, y_key, y_key_size); + memcpy(hash_node->data, data, data_size); + + hash_val = nbl_common_calc_hash_key(key, key_size, + tbl_mgt->tbl_key.bucket_size); + x_hash_val = nbl_common_calc_hash_key(x_key, x_key_size, + tbl_mgt->tbl_key.x_bucket_size); + y_hash_val = nbl_common_calc_hash_key(y_key, y_key_size, + tbl_mgt->tbl_key.y_bucket_size); + + devm_kfree(tbl_mgt->tbl_key.dev, key); + + if (tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + hlist_add_head(&hash_node->node, tbl_mgt->hash + hash_val); + hlist_add_head(&hash_node->x_axis_node, + tbl_mgt->x_axis_hash + x_hash_val); + hlist_add_head(&hash_node->y_axis_node, + tbl_mgt->y_axis_hash + y_hash_val); + + tbl_mgt->node_num++; + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); + + return 0; + +alloc_data_failed: + devm_kfree(tbl_mgt->tbl_key.dev, key); +alloc_key_failed: + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->y_axis_key); +alloc_y_key_failed: + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->x_axis_key); +alloc_x_key_failed: + devm_kfree(tbl_mgt->tbl_key.dev, hash_node); + + return -1; +} + +/* + * get a hash node, return the data if node exist + */ +void *nbl_common_get_hash_xy_node(void *priv, void *x_key, void *y_key) +{ + struct nbl_hash_xy_tbl_mgt *tbl_mgt = + (struct nbl_hash_xy_tbl_mgt *)priv; + struct nbl_hash_entry_xy_node *hash_node; + struct hlist_head *head; + void *data = NULL; + void *key; + u32 hash_val; + u16 key_size, x_key_size, y_key_size; + + x_key_size = tbl_mgt->tbl_key.x_key_size; + y_key_size = tbl_mgt->tbl_key.y_key_size; + key_size = x_key_size + y_key_size; + key = devm_kzalloc(tbl_mgt->tbl_key.dev, key_size, GFP_KERNEL); + if (!key) + return NULL; + + memcpy(key, x_key, x_key_size); + memcpy(key + x_key_size, y_key, y_key_size); + hash_val = nbl_common_calc_hash_key(key, key_size, + tbl_mgt->tbl_key.bucket_size); + head = tbl_mgt->hash + hash_val; + + if 
(tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + hlist_for_each_entry(hash_node, head, node) + if (!memcmp(hash_node->x_axis_key, x_key, x_key_size) && + !memcmp(hash_node->y_axis_key, y_key, y_key_size)) { + data = hash_node->data; + break; + } + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); + + devm_kfree(tbl_mgt->tbl_key.dev, key); + + return data; +} + +static void +nbl_common_remove_hash_xy_node(struct nbl_hash_xy_tbl_mgt *tbl_mgt, + struct nbl_hash_entry_xy_node *hash_node) +{ + hlist_del(&hash_node->node); + hlist_del(&hash_node->x_axis_node); + hlist_del(&hash_node->y_axis_node); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->x_axis_key); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->y_axis_key); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node->data); + devm_kfree(tbl_mgt->tbl_key.dev, hash_node); + tbl_mgt->node_num--; +} + +/* + * free a hash node + */ +void nbl_common_free_hash_xy_node(void *priv, void *x_key, void *y_key) +{ + struct nbl_hash_xy_tbl_mgt *tbl_mgt = + (struct nbl_hash_xy_tbl_mgt *)priv; + struct nbl_hash_entry_xy_node *hash_node; + struct hlist_head *head; + void *key; + u32 hash_val; + + u16 key_size, x_key_size, y_key_size; + + x_key_size = tbl_mgt->tbl_key.x_key_size; + y_key_size = tbl_mgt->tbl_key.y_key_size; + key_size = x_key_size + y_key_size; + key = devm_kzalloc(tbl_mgt->tbl_key.dev, key_size, GFP_KERNEL); + if (!key) + return; + + memcpy(key, x_key, x_key_size); + memcpy(key + x_key_size, y_key, y_key_size); + hash_val = nbl_common_calc_hash_key(key, key_size, + tbl_mgt->tbl_key.bucket_size); + head = tbl_mgt->hash + hash_val; + + if (tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + hlist_for_each_entry(hash_node, head, node) + if (!memcmp(hash_node->x_axis_key, x_key, x_key_size) && + !memcmp(hash_node->y_axis_key, y_key, y_key_size)) { + break; + } + + if (hash_node) + nbl_common_remove_hash_xy_node(tbl_mgt, hash_node); + + if (tbl_mgt->tbl_key.lock_need) + 
mutex_unlock(&tbl_mgt->lock); + + devm_kfree(tbl_mgt->tbl_key.dev, key); +} + +/* 0: the node accord with the match condition */ +static int +nbl_common_match_hash_xy_node(struct nbl_hash_xy_tbl_mgt *tbl_mgt, + struct nbl_hash_xy_tbl_scan_key *key, + struct nbl_hash_entry_xy_node *hash_node) +{ + int ret = 0; + + if (key->match_func) { + ret = key->match_func(key->match_condition, + hash_node->x_axis_key, + hash_node->y_axis_key, hash_node->data); + if (ret) + return ret; + } + + if (key->action_func) + key->action_func(key->action_priv, hash_node->x_axis_key, + hash_node->y_axis_key, hash_node->data); + + if (key->op_type == NBL_HASH_TBL_OP_DELETE) + nbl_common_remove_hash_xy_node(tbl_mgt, hash_node); + + return 0; +} + +/* + * scan by x_axis or y_aixs or none, and return the match node number + */ +u16 nbl_common_scan_hash_xy_node(void *priv, + struct nbl_hash_xy_tbl_scan_key *key) +{ + struct nbl_hash_xy_tbl_mgt *tbl = + (struct nbl_hash_xy_tbl_mgt *)priv; + struct nbl_hash_entry_xy_node *hash_node; + struct hlist_node *safe_node; + struct hlist_head *head; + int ret; + u32 i; + u32 hash_val; + u16 x_key_size; + u16 y_key_size; + u16 node_num = 0; + + if (tbl->tbl_key.lock_need) + mutex_lock(&tbl->lock); + + if (key->scan_type == NBL_HASH_TBL_X_AXIS_SCAN) { + x_key_size = tbl->tbl_key.x_key_size; + hash_val = nbl_common_calc_hash_key(key->x_key, x_key_size, + tbl->tbl_key.x_bucket_size); + head = tbl->x_axis_hash + hash_val; + hlist_for_each_entry_safe(hash_node, safe_node, head, + x_axis_node) { + if (!memcmp(hash_node->x_axis_key, key->x_key, + x_key_size)) { + ret = nbl_common_match_hash_xy_node(tbl, key, + hash_node); + if (!ret) { + node_num++; + if (key->only_query_exist) + break; + } + } + } + } else if (key->scan_type == NBL_HASH_TBL_Y_AXIS_SCAN) { + y_key_size = tbl->tbl_key.y_key_size; + hash_val = nbl_common_calc_hash_key(key->y_key, y_key_size, + tbl->tbl_key.y_bucket_size); + head = tbl->y_axis_hash + hash_val; + hlist_for_each_entry_safe(hash_node, 
safe_node, head, + y_axis_node) { + if (!memcmp(hash_node->y_axis_key, key->y_key, + y_key_size)) { + ret = nbl_common_match_hash_xy_node(tbl, key, + hash_node); + if (!ret) { + node_num++; + if (key->only_query_exist) + break; + } + } + } + } else { + for (i = 0; i < tbl->tbl_key.bucket_size; i++) { + head = tbl->hash + i; + hlist_for_each_entry_safe(hash_node, safe_node, head, + node) { + ret = nbl_common_match_hash_xy_node(tbl, key, + hash_node); + if (!ret) + node_num++; + } + } + } + + if (tbl->tbl_key.lock_need) + mutex_unlock(&tbl->lock); + + return node_num; +} + +void nbl_common_rm_hash_xy_table(void *priv, + struct nbl_hash_xy_tbl_del_key *key) +{ + struct nbl_hash_xy_tbl_mgt *tbl_mgt = + (struct nbl_hash_xy_tbl_mgt *)priv; + struct nbl_hash_entry_xy_node *hash_node; + struct hlist_node *safe_node; + struct hlist_head *head; + struct device *dev; + u32 i; + + if (!priv) + return; + + if (tbl_mgt->tbl_key.lock_need) + mutex_lock(&tbl_mgt->lock); + + for (i = 0; i < tbl_mgt->tbl_key.bucket_size; i++) { + head = tbl_mgt->hash + i; + hlist_for_each_entry_safe(hash_node, safe_node, head, node) { + if (key->action_func) + key->action_func(key->action_priv, + hash_node->x_axis_key, + hash_node->y_axis_key, + hash_node->data); + nbl_common_remove_hash_xy_node(tbl_mgt, hash_node); + } + } + + devm_kfree(tbl_mgt->tbl_key.dev, tbl_mgt->hash); + devm_kfree(tbl_mgt->tbl_key.dev, tbl_mgt->x_axis_hash); + devm_kfree(tbl_mgt->tbl_key.dev, tbl_mgt->y_axis_hash); + + if (tbl_mgt->tbl_key.lock_need) + mutex_unlock(&tbl_mgt->lock); + + dev = tbl_mgt->tbl_key.dev; + devm_kfree(dev, tbl_mgt); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h new file mode 100644 index 000000000000..efb9eb410546 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_common/nbl_common.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_COMMON_H_ +#define _NBL_COMMON_H_ + +#include "nbl_def_common.h" + +/* + * the key_hash size is index_size/NBL_INDEX_HASH_DIVISOR. eg index_size is + * 1024, the key_hash size is 1024/16 = 64 + */ +#define NBL_INDEX_HASH_DIVISOR 16 + +/* list only need one bucket size */ +#define NBL_HASH_TBL_LIST_BUCKET_SIZE 1 + +struct nbl_hash_tbl_mgt { + struct nbl_hash_tbl_key tbl_key; + struct hlist_head *hash; + struct mutex lock; /* support multi thread */ + u16 node_num; +}; + +struct nbl_hash_xy_tbl_mgt { + struct nbl_hash_xy_tbl_key tbl_key; + struct hlist_head *hash; + struct hlist_head *x_axis_hash; + struct hlist_head *y_axis_hash; + struct mutex lock; /* support multi thread */ + u16 node_num; +}; + +/* it used for y_axis no necessay */ +struct nbl_hash_entry_node { + struct hlist_node node; + void *key; + void *data; +}; + +/* it used for y_axis no necessay */ +struct nbl_hash_entry_xy_node { + struct hlist_node node; + struct hlist_node x_axis_node; + struct hlist_node y_axis_node; + void *x_axis_key; + void *y_axis_key; + void *data; +}; + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h index 33ed810ec7d0..fe83bd9f524c 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h @@ -9,6 +9,7 @@ #include <linux/pci.h> #include "nbl_product_base.h" +#include "nbl_def_channel.h" #include "nbl_def_hw.h" #include "nbl_def_common.h" @@ -18,7 +19,10 @@ #define NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter) ((adapter)->product_base_ops) #define NBL_ADAP_TO_HW_MGT(adapter) ((adapter)->core.hw_mgt) +#define NBL_ADAP_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt) #define NBL_ADAP_TO_HW_OPS_TBL(adapter) ((adapter)->intf.hw_ops_tbl) +#define NBL_ADAP_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl) + #define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1) #define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, 
NBL_CAP_HAS_CTRL_BIT) @@ -39,6 +43,7 @@ enum { struct nbl_interface { struct nbl_hw_ops_tbl *hw_ops_tbl; + struct nbl_channel_ops_tbl *channel_ops_tbl; }; struct nbl_core { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c index 40701ff147e2..bf7c95ea33da 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c @@ -6,7 +6,266 @@ #include "nbl_hw_leonis.h" +static void nbl_hw_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr, + u8 txrx) +{ + /* local_qid 0 and 1 denote rx and tx queue respectively */ + u32 local_qid = txrx; + u32 value = ((u32)tail_ptr << 16) | local_qid; + + /* wmb for doorbell */ + wmb(); + nbl_mbx_wr32(priv, NBL_MAILBOX_NOTIFY_ADDR, value); +} + +static void nbl_hw_config_mailbox_rxq(void *priv, dma_addr_t dma_addr, + int size_bwid) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_rx_table = { 0 }; + + qinfo_cfg_rx_table.queue_rst = 1; + nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR, + (u8 *)&qinfo_cfg_rx_table, + sizeof(qinfo_cfg_rx_table)); + + qinfo_cfg_rx_table.queue_base_addr_l = (u32)(dma_addr & 0xFFFFFFFF); + qinfo_cfg_rx_table.queue_base_addr_h = (u32)(dma_addr >> 32); + qinfo_cfg_rx_table.queue_size_bwind = (u32)size_bwid; + qinfo_cfg_rx_table.queue_rst = 0; + qinfo_cfg_rx_table.queue_en = 1; + nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR, + (u8 *)&qinfo_cfg_rx_table, + sizeof(qinfo_cfg_rx_table)); +} + +static void nbl_hw_config_mailbox_txq(void *priv, dma_addr_t dma_addr, + int size_bwid) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tx_table = { 0 }; + + qinfo_cfg_tx_table.queue_rst = 1; + nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR, + (u8 *)&qinfo_cfg_tx_table, + sizeof(qinfo_cfg_tx_table)); + + qinfo_cfg_tx_table.queue_base_addr_l = 
(u32)(dma_addr & 0xFFFFFFFF); + qinfo_cfg_tx_table.queue_base_addr_h = (u32)(dma_addr >> 32); + qinfo_cfg_tx_table.queue_size_bwind = (u32)size_bwid; + qinfo_cfg_tx_table.queue_rst = 0; + qinfo_cfg_tx_table.queue_en = 1; + nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR, + (u8 *)&qinfo_cfg_tx_table, + sizeof(qinfo_cfg_tx_table)); +} + +static void nbl_hw_stop_mailbox_rxq(void *priv) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_rx_table = { 0 }; + + nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR, + (u8 *)&qinfo_cfg_rx_table, + sizeof(qinfo_cfg_rx_table)); +} + +static void nbl_hw_stop_mailbox_txq(void *priv) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tx_table = { 0 }; + + nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR, + (u8 *)&qinfo_cfg_tx_table, + sizeof(qinfo_cfg_tx_table)); +} + +static u16 nbl_hw_get_mailbox_rx_tail_ptr(void *priv) +{ + struct nbl_mailbox_qinfo_cfg_dbg_tbl cfg_dbg_tbl = { 0 }; + + nbl_hw_read_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_DBG_TABLE_ADDR, + (u8 *)&cfg_dbg_tbl, sizeof(cfg_dbg_tbl)); + return cfg_dbg_tbl.rx_tail_ptr; +} + +static bool nbl_hw_check_mailbox_dma_err(void *priv, bool tx) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tbl = { 0 }; + u64 addr; + + if (tx) + addr = NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR; + else + addr = NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR; + + nbl_hw_read_mbx_regs(priv, addr, (u8 *)&qinfo_cfg_tbl, + sizeof(qinfo_cfg_tbl)); + return !!qinfo_cfg_tbl.dif_err; +} + +static u32 nbl_hw_get_host_pf_mask(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u32 data; + + nbl_hw_rd_regs(hw_mgt, NBL_PCIE_HOST_K_PF_MASK_REG, (u8 *)&data, + sizeof(data)); + return data; +} + +static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus, + u16 devid, u16 function) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_mailbox_qinfo_map_table mb_qinfo_map; + + memset(&mb_qinfo_map, 0, sizeof(mb_qinfo_map)); + 
mb_qinfo_map.function = function; + mb_qinfo_map.devid = devid; + mb_qinfo_map.bus = bus; + mb_qinfo_map.msix_idx_valid = 0; + nbl_hw_wr_regs(hw_mgt, NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id), + (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map)); +} + +static void nbl_hw_config_adminq_rxq(void *priv, dma_addr_t dma_addr, + int size_bwid) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_rx_table = { 0 }; + + qinfo_cfg_rx_table.queue_rst = 1; + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_QINFO_CFG_RX_TABLE_ADDR, + (u8 *)&qinfo_cfg_rx_table, + sizeof(qinfo_cfg_rx_table)); + + qinfo_cfg_rx_table.queue_base_addr_l = (u32)(dma_addr & 0xFFFFFFFF); + qinfo_cfg_rx_table.queue_base_addr_h = (u32)(dma_addr >> 32); + qinfo_cfg_rx_table.queue_size_bwind = (u32)size_bwid; + qinfo_cfg_rx_table.queue_rst = 0; + qinfo_cfg_rx_table.queue_en = 1; + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_QINFO_CFG_RX_TABLE_ADDR, + (u8 *)&qinfo_cfg_rx_table, + sizeof(qinfo_cfg_rx_table)); +} + +static void nbl_hw_config_adminq_txq(void *priv, dma_addr_t dma_addr, + int size_bwid) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tx_table = { 0 }; + + qinfo_cfg_tx_table.queue_rst = 1; + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_QINFO_CFG_TX_TABLE_ADDR, + (u8 *)&qinfo_cfg_tx_table, + sizeof(qinfo_cfg_tx_table)); + + qinfo_cfg_tx_table.queue_base_addr_l = (u32)(dma_addr & 0xFFFFFFFF); + qinfo_cfg_tx_table.queue_base_addr_h = (u32)(dma_addr >> 32); + qinfo_cfg_tx_table.queue_size_bwind = (u32)size_bwid; + qinfo_cfg_tx_table.queue_rst = 0; + qinfo_cfg_tx_table.queue_en = 1; + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_QINFO_CFG_TX_TABLE_ADDR, + (u8 *)&qinfo_cfg_tx_table, + sizeof(qinfo_cfg_tx_table)); +} + +static void nbl_hw_stop_adminq_rxq(void *priv) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_rx_table = { 0 }; + + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_QINFO_CFG_RX_TABLE_ADDR, + (u8 *)&qinfo_cfg_rx_table, + sizeof(qinfo_cfg_rx_table)); +} + +static void nbl_hw_stop_adminq_txq(void *priv) +{ + struct 
nbl_mailbox_qinfo_cfg_table qinfo_cfg_tx_table = { 0 }; + + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_QINFO_CFG_TX_TABLE_ADDR, + (u8 *)&qinfo_cfg_tx_table, + sizeof(qinfo_cfg_tx_table)); +} + +static void nbl_hw_cfg_adminq_qinfo(void *priv, u16 bus, u16 devid, + u16 function) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_adminq_qinfo_map_table adminq_qinfo_map = {0}; + + memset(&adminq_qinfo_map, 0, sizeof(adminq_qinfo_map)); + adminq_qinfo_map.function = function; + adminq_qinfo_map.devid = devid; + adminq_qinfo_map.bus = bus; + + nbl_hw_write_mbx_regs(hw_mgt, NBL_ADMINQ_MSIX_MAP_TABLE_ADDR, + (u8 *)&adminq_qinfo_map, + sizeof(adminq_qinfo_map)); +} + +static void nbl_hw_update_adminq_queue_tail_ptr(void *priv, u16 tail_ptr, + u8 txrx) +{ + /* local_qid 0 and 1 denote rx and tx queue respectively */ + u32 local_qid = txrx; + u32 value = ((u32)tail_ptr << 16) | local_qid; + + /* wmb for doorbell */ + wmb(); + nbl_mbx_wr32(priv, NBL_ADMINQ_NOTIFY_ADDR, value); +} + +static bool nbl_hw_check_adminq_dma_err(void *priv, bool tx) +{ + struct nbl_mailbox_qinfo_cfg_table qinfo_cfg_tbl = { 0 }; + u64 addr; + + if (tx) + addr = NBL_ADMINQ_QINFO_CFG_TX_TABLE_ADDR; + else + addr = NBL_ADMINQ_QINFO_CFG_RX_TABLE_ADDR; + + nbl_hw_read_mbx_regs(priv, addr, (u8 *)&qinfo_cfg_tbl, + sizeof(qinfo_cfg_tbl)); + + if (!qinfo_cfg_tbl.rsv1 && !qinfo_cfg_tbl.rsv2 && qinfo_cfg_tbl.dif_err) + return true; + + return false; +} + +static void nbl_hw_set_hw_status(void *priv, enum nbl_hw_status hw_status) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + hw_mgt->hw_status = hw_status; +}; + +static enum nbl_hw_status nbl_hw_get_hw_status(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + return hw_mgt->hw_status; +}; + static struct nbl_hw_ops hw_ops = { + .update_mailbox_queue_tail_ptr = nbl_hw_update_mailbox_queue_tail_ptr, + .config_mailbox_rxq = nbl_hw_config_mailbox_rxq, + .config_mailbox_txq = nbl_hw_config_mailbox_txq, + 
.stop_mailbox_rxq = nbl_hw_stop_mailbox_rxq, + .stop_mailbox_txq = nbl_hw_stop_mailbox_txq, + .get_mailbox_rx_tail_ptr = nbl_hw_get_mailbox_rx_tail_ptr, + .check_mailbox_dma_err = nbl_hw_check_mailbox_dma_err, + .get_host_pf_mask = nbl_hw_get_host_pf_mask, + .cfg_mailbox_qinfo = nbl_hw_cfg_mailbox_qinfo, + + .config_adminq_rxq = nbl_hw_config_adminq_rxq, + .config_adminq_txq = nbl_hw_config_adminq_txq, + .stop_adminq_rxq = nbl_hw_stop_adminq_rxq, + .stop_adminq_txq = nbl_hw_stop_adminq_txq, + .cfg_adminq_qinfo = nbl_hw_cfg_adminq_qinfo, + .update_adminq_queue_tail_ptr = nbl_hw_update_adminq_queue_tail_ptr, + .check_adminq_dma_err = nbl_hw_check_adminq_dma_err, + + .set_hw_status = nbl_hw_set_hw_status, + .get_hw_status = nbl_hw_get_hw_status, + }; /* Structure starts here, adding an op should not modify anything below */ diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h new file mode 100644 index 000000000000..aa28fbd589f1 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_channel.h @@ -0,0 +1,715 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_DEF_CHANNEL_H_ +#define _NBL_DEF_CHANNEL_H_ + +#include <linux/if_ether.h> +#include "nbl_include.h" + +#define NBL_CHAN_OPS_TBL_TO_OPS(chan_ops_tbl) ((chan_ops_tbl)->ops) +#define NBL_CHAN_OPS_TBL_TO_PRIV(chan_ops_tbl) ((chan_ops_tbl)->priv) + +#define NBL_CHAN_SEND(chan_send, dst_id, mesg_type, argument, arg_length,\ + response, resp_length, need_ack) \ +do { \ + typeof(chan_send) *__chan_send = &(chan_send); \ + __chan_send->dstid = (dst_id); \ + __chan_send->msg_type = (mesg_type); \ + __chan_send->arg = (argument); \ + __chan_send->arg_len = (arg_length); \ + __chan_send->resp = (response); \ + __chan_send->resp_len = (resp_length); \ + __chan_send->ack = (need_ack); \ +} while (0) + +#define NBL_CHAN_ACK(chan_ack, dst_id, mesg_type, msg_id, err_code, ack_data, \ + data_length) \ +do { \ + typeof(chan_ack) *__chan_ack = &(chan_ack); \ + __chan_ack->dstid = (dst_id); \ + __chan_ack->msg_type = (mesg_type); \ + __chan_ack->msgid = (msg_id); \ + __chan_ack->err = (err_code); \ + __chan_ack->data = (ack_data); \ + __chan_ack->data_len = (data_length); \ +} while (0) + +typedef void (*nbl_chan_resp)(void *, u16, u16, void *, u32); + +enum { + NBL_CHAN_RESP_OK, + NBL_CHAN_RESP_ERR, +}; + +enum nbl_chan_msg_type { + NBL_CHAN_MSG_ACK, + NBL_CHAN_MSG_ADD_MACVLAN, + NBL_CHAN_MSG_DEL_MACVLAN, + NBL_CHAN_MSG_ADD_MULTI_RULE, + NBL_CHAN_MSG_DEL_MULTI_RULE, + NBL_CHAN_MSG_SETUP_MULTI_GROUP, + NBL_CHAN_MSG_REMOVE_MULTI_GROUP, + NBL_CHAN_MSG_REGISTER_NET, + NBL_CHAN_MSG_UNREGISTER_NET, + NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, + NBL_CHAN_MSG_FREE_TXRX_QUEUES, + NBL_CHAN_MSG_SETUP_QUEUE, + NBL_CHAN_MSG_REMOVE_ALL_QUEUES, + NBL_CHAN_MSG_CFG_DSCH, + NBL_CHAN_MSG_SETUP_CQS, + NBL_CHAN_MSG_REMOVE_CQS, + NBL_CHAN_MSG_CFG_QDISC_MQPRIO, + NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, + NBL_CHAN_MSG_DESTROY_MSIX_MAP, + NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, + NBL_CHAN_MSG_GET_GLOBAL_VECTOR, + NBL_CHAN_MSG_GET_VSI_ID, + NBL_CHAN_MSG_SET_PROSISC_MODE, + 
NBL_CHAN_MSG_GET_FIRMWARE_VERSION, + NBL_CHAN_MSG_GET_QUEUE_ERR_STATS, + NBL_CHAN_MSG_GET_COALESCE, + NBL_CHAN_MSG_SET_COALESCE, + NBL_CHAN_MSG_SET_SPOOF_CHECK_ADDR, + NBL_CHAN_MSG_SET_VF_SPOOF_CHECK, + NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE, + NBL_CHAN_MSG_GET_RXFH_INDIR, + NBL_CHAN_MSG_GET_RXFH_RSS_KEY, + NBL_CHAN_MSG_GET_RXFH_RSS_ALG_SEL, + NBL_CHAN_MSG_GET_HW_CAPS, + NBL_CHAN_MSG_GET_HW_STATE, + NBL_CHAN_MSG_REGISTER_RDMA, + NBL_CHAN_MSG_UNREGISTER_RDMA, + NBL_CHAN_MSG_GET_REAL_HW_ADDR, + NBL_CHAN_MSG_GET_REAL_BDF, + NBL_CHAN_MSG_GRC_PROCESS, + NBL_CHAN_MSG_SET_SFP_STATE, + NBL_CHAN_MSG_SET_ETH_LOOPBACK, + NBL_CHAN_MSG_CHECK_ACTIVE_VF, + NBL_CHAN_MSG_GET_PRODUCT_FLEX_CAP, + NBL_CHAN_MSG_ALLOC_KTLS_TX_INDEX, + NBL_CHAN_MSG_FREE_KTLS_TX_INDEX, + NBL_CHAN_MSG_CFG_KTLS_TX_KEYMAT, + NBL_CHAN_MSG_ALLOC_KTLS_RX_INDEX, + NBL_CHAN_MSG_FREE_KTLS_RX_INDEX, + NBL_CHAN_MSG_CFG_KTLS_RX_KEYMAT, + NBL_CHAN_MSG_CFG_KTLS_RX_RECORD, + NBL_CHAN_MSG_ADD_KTLS_RX_FLOW, + NBL_CHAN_MSG_DEL_KTLS_RX_FLOW, + NBL_CHAN_MSG_ALLOC_IPSEC_TX_INDEX, + NBL_CHAN_MSG_FREE_IPSEC_TX_INDEX, + NBL_CHAN_MSG_ALLOC_IPSEC_RX_INDEX, + NBL_CHAN_MSG_FREE_IPSEC_RX_INDEX, + NBL_CHAN_MSG_CFG_IPSEC_TX_SAD, + NBL_CHAN_MSG_CFG_IPSEC_RX_SAD, + NBL_CHAN_MSG_ADD_IPSEC_TX_FLOW, + NBL_CHAN_MSG_DEL_IPSEC_TX_FLOW, + NBL_CHAN_MSG_ADD_IPSEC_RX_FLOW, + NBL_CHAN_MSG_DEL_IPSEC_RX_FLOW, + NBL_CHAN_MSG_NOTIFY_IPSEC_HARD_EXPIRE, + NBL_CHAN_MSG_GET_MBX_IRQ_NUM, + NBL_CHAN_MSG_CLEAR_FLOW, + NBL_CHAN_MSG_CLEAR_QUEUE, + NBL_CHAN_MSG_GET_ETH_ID, + NBL_CHAN_MSG_SET_OFFLOAD_STATUS, + + NBL_CHAN_MSG_INIT_OFLD, + NBL_CHAN_MSG_INIT_CMDQ, + NBL_CHAN_MSG_DESTROY_CMDQ, + NBL_CHAN_MSG_RESET_CMDQ, + NBL_CHAN_MSG_INIT_FLOW, + NBL_CHAN_MSG_DEINIT_FLOW, + NBL_CHAN_MSG_OFFLOAD_FLOW_RULE, + NBL_CHAN_MSG_GET_ACL_SWITCH, + NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID, + NBL_CHAN_MSG_INIT_REP, + NBL_CHAN_MSG_GET_LINE_RATE_INFO, + + NBL_CHAN_MSG_REGISTER_NET_REP, + NBL_CHAN_MSG_UNREGISTER_NET_REP, + NBL_CHAN_MSG_REGISTER_ETH_REP, + NBL_CHAN_MSG_UNREGISTER_ETH_REP, + 
NBL_CHAN_MSG_REGISTER_UPCALL_PORT, + NBL_CHAN_MSG_UNREGISTER_UPCALL_PORT, + NBL_CHAN_MSG_GET_PORT_STATE, + NBL_CHAN_MSG_SET_PORT_ADVERTISING, + NBL_CHAN_MSG_GET_MODULE_INFO, + NBL_CHAN_MSG_GET_MODULE_EEPROM, + NBL_CHAN_MSG_GET_LINK_STATE, + NBL_CHAN_MSG_NOTIFY_LINK_STATE, + + NBL_CHAN_MSG_GET_QUEUE_CXT, + NBL_CHAN_MSG_CFG_LOG, + NBL_CHAN_MSG_INIT_VDPAQ, + NBL_CHAN_MSG_DESTROY_VDPAQ, + NBL_CHAN_GET_UPCALL_PORT, + NBL_CHAN_MSG_NOTIFY_ETH_REP_LINK_STATE, + NBL_CHAN_MSG_SET_ETH_MAC_ADDR, + NBL_CHAN_MSG_GET_FUNCTION_ID, + NBL_CHAN_MSG_GET_CHIP_TEMPERATURE, + + NBL_CHAN_MSG_DISABLE_HW_FLOW, + NBL_CHAN_MSG_ENABLE_HW_FLOW, + NBL_CHAN_MSG_SET_UPCALL_RULE, + NBL_CHAN_MSG_UNSET_UPCALL_RULE, + + NBL_CHAN_MSG_GET_REG_DUMP, + NBL_CHAN_MSG_GET_REG_DUMP_LEN, + + NBL_CHAN_MSG_CFG_LAG_HASH_ALGORITHM, + NBL_CHAN_MSG_CFG_LAG_MEMBER_FWD, + NBL_CHAN_MSG_CFG_LAG_MEMBER_LIST, + NBL_CHAN_MSG_CFG_LAG_MEMBER_UP_ATTR, + NBL_CHAN_MSG_ADD_LAG_FLOW, + NBL_CHAN_MSG_DEL_LAG_FLOW, + + NBL_CHAN_MSG_SWITCHDEV_INIT_CMDQ, + NBL_CHAN_MSG_SWITCHDEV_DEINIT_CMDQ, + NBL_CHAN_MSG_SET_TC_FLOW_INFO, + NBL_CHAN_MSG_UNSET_TC_FLOW_INFO, + NBL_CHAN_MSG_INIT_ACL, + NBL_CHAN_MSG_UNINIT_ACL, + + NBL_CHAN_MSG_CFG_LAG_MCC, + + NBL_CHAN_MSG_REGISTER_VSI2Q, + NBL_CHAN_MSG_SETUP_Q2VSI, + NBL_CHAN_MSG_REMOVE_Q2VSI, + NBL_CHAN_MSG_SETUP_RSS, + NBL_CHAN_MSG_REMOVE_RSS, + NBL_CHAN_MSG_GET_REP_QUEUE_INFO, + NBL_CHAN_MSG_CTRL_PORT_LED, + NBL_CHAN_MSG_NWAY_RESET, + NBL_CHAN_MSG_SET_INTL_SUPPRESS_LEVEL, + NBL_CHAN_MSG_GET_ETH_STATS, + NBL_CHAN_MSG_GET_MODULE_TEMPERATURE, + NBL_CHAN_MSG_GET_BOARD_INFO, + + NBL_CHAN_MSG_GET_P4_USED, + NBL_CHAN_MSG_GET_VF_BASE_VSI_ID, + + NBL_CHAN_MSG_ADD_LLDP_FLOW, + NBL_CHAN_MSG_DEL_LLDP_FLOW, + + NBL_CHAN_MSG_CFG_ETH_BOND_INFO, + NBL_CHAN_MSG_CFG_DUPPKT_MCC, + + NBL_CHAN_MSG_ADD_ND_UPCALL_FLOW, + NBL_CHAN_MSG_DEL_ND_UPCALL_FLOW, + + NBL_CHAN_MSG_GET_BOARD_ID, + + NBL_CHAN_MSG_SET_SHAPING_DPORT_VLD, + NBL_CHAN_MSG_SET_DPORT_FC_TH_VLD, + + NBL_CHAN_MSG_REGISTER_RDMA_BOND, + 
NBL_CHAN_MSG_UNREGISTER_RDMA_BOND, + + NBL_CHAN_MSG_RESTORE_NETDEV_QUEUE, + NBL_CHAN_MSG_RESTART_NETDEV_QUEUE, + NBL_CHAN_MSG_RESTORE_HW_QUEUE, + + NBL_CHAN_MSG_KEEP_ALIVE, + + NBL_CHAN_MSG_GET_BASE_MAC_ADDR, + + NBL_CHAN_MSG_CFG_BOND_SHAPING, + NBL_CHAN_MSG_CFG_BGID_BACK_PRESSURE, + + NBL_CHAN_MSG_ALLOC_KT_BLOCK, + NBL_CHAN_MSG_FREE_KT_BLOCK, + + NBL_CHAN_MSG_GET_USER_QUEUE_INFO, + NBL_CHAN_MSG_GET_ETH_BOND_INFO, + + NBL_CHAN_MSG_CLEAR_ACCEL_FLOW, + NBL_CHAN_MSG_SET_BRIDGE_MODE, + + NBL_CHAN_MSG_GET_VF_FUNCTION_ID, + NBL_CHAN_MSG_NOTIFY_LINK_FORCED, + + NBL_CHAN_MSG_SET_PMD_DEBUG, + + NBL_CHAN_MSG_REGISTER_FUNC_MAC, + NBL_CHAN_MSG_SET_TX_RATE, + + NBL_CHAN_MSG_REGISTER_FUNC_LINK_FORCED, + NBL_CHAN_MSG_GET_LINK_FORCED, + + NBL_CHAN_MSG_REGISTER_FUNC_VLAN, + + NBL_CHAN_MSG_GET_FD_FLOW, + NBL_CHAN_MSG_GET_FD_FLOW_CNT, + NBL_CHAN_MSG_GET_FD_FLOW_ALL, + NBL_CHAN_MSG_GET_FD_FLOW_MAX, + NBL_CHAN_MSG_REPLACE_FD_FLOW, + NBL_CHAN_MSG_REMOVE_FD_FLOW, + NBL_CHAN_MSG_CFG_FD_FLOW_STATE, + + NBL_CHAN_MSG_REGISTER_FUNC_RATE, + NBL_CHAN_MSG_NOTIFY_VLAN, + NBL_CHAN_MSG_GET_XDP_QUEUE_INFO, + + NBL_CHAN_MSG_STOP_ABNORMAL_SW_QUEUE, + NBL_CHAN_MSG_STOP_ABNORMAL_HW_QUEUE, + NBL_CHAN_MSG_NOTIFY_RESET_EVENT, + NBL_CHAN_MSG_ACK_RESET_EVENT, + NBL_CHAN_MSG_GET_VF_VSI_ID, + + NBL_CHAN_MSG_CONFIGURE_QOS, + NBL_CHAN_MSG_GET_PFC_BUFFER_SIZE, + NBL_CHAN_MSG_SET_PFC_BUFFER_SIZE, + NBL_CHAN_MSG_GET_VF_STATS, + NBL_CHAN_MSG_REGISTER_FUNC_TRUST, + NBL_CHAN_MSG_NOTIFY_TRUST, + NBL_CHAN_CHECK_VF_IS_ACTIVE, + NBL_CHAN_MSG_GET_ETH_ABNORMAL_STATS, + NBL_CHAN_MSG_GET_ETH_CTRL_STATS, + NBL_CHAN_MSG_GET_PAUSE_STATS, + NBL_CHAN_MSG_GET_ETH_MAC_STATS, + NBL_CHAN_MSG_GET_FEC_STATS, + NBL_CHAN_MSG_CFG_MULTI_MCAST_RULE, + NBL_CHAN_MSG_GET_LINK_DOWN_COUNT, + NBL_CHAN_MSG_GET_LINK_STATUS_OPCODE, + NBL_CHAN_MSG_GET_RMON_STATS, + NBL_CHAN_MSG_REGISTER_PF_NAME, + NBL_CHAN_MSG_GET_PF_NAME, + NBL_CHAN_MSG_CONFIGURE_RDMA_BW, + NBL_CHAN_MSG_SET_RATE_LIMIT, + NBL_CHAN_MSG_SET_TC_WGT, + NBL_CHAN_MSG_REMOVE_QUEUE, + 
NBL_CHAN_MSG_GET_MIRROR_TABLE_ID, + NBL_CHAN_MSG_CONFIGURE_MIRROR, + NBL_CHAN_MSG_CONFIGURE_MIRROR_TABLE, + NBL_CHAN_MSG_CLEAR_MIRROR_CFG, + NBL_CHAN_MSG_MIRROR_OUTPUTPORT_NOTIFY, + NBL_CHAN_MSG_CHECK_FLOWTABLE_SPEC, + NBL_CHAN_CHECK_VF_IS_VDPA, + NBL_CHAN_MSG_GET_VDPA_VF_STATS, + NBL_CHAN_MSG_SET_RX_RATE, + NBL_CHAN_GET_UVN_PKT_DROP_STATS, + NBL_CHAN_GET_USTORE_PKT_DROP_STATS, + NBL_CHAN_GET_USTORE_TOTAL_PKT_DROP_STATS, + NBL_CHAN_MSG_SET_WOL, + NBL_CHAN_MSG_INIT_VF_MSIX_MAP, + NBL_CHAN_MSG_GET_ST_NAME, + + NBL_CHAN_MSG_MTU_SET = 501, + NBL_CHAN_MSG_SET_RXFH_INDIR = 506, + NBL_CHAN_MSG_SET_RXFH_RSS_ALG_SEL = 508, + + /* mailbox msg end */ + NBL_CHAN_MSG_MAILBOX_MAX, + + /* adminq msg */ + NBL_CHAN_MSG_ADMINQ_GET_EMP_VERSION = + 0x8101, /* Deprecated, should not be used */ + NBL_CHAN_MSG_ADMINQ_GET_NVM_VERSION = 0x8102, + NBL_CHAN_MSG_ADMINQ_REBOOT = 0x8104, + NBL_CHAN_MSG_ADMINQ_FLR_NOTIFY = 0x8105, + NBL_CHAN_MSG_ADMINQ_NOTIFY_FW_RESET = 0x8106, + NBL_CHAN_MSG_ADMINQ_LOAD_P4 = 0x8107, + NBL_CHAN_MSG_ADMINQ_LOAD_P4_DEFAULT = 0x8108, + NBL_CHAN_MSG_ADMINQ_EXT_ALERT = 0x8109, + NBL_CHAN_MSG_ADMINQ_FLASH_ERASE = 0x8201, + NBL_CHAN_MSG_ADMINQ_FLASH_READ = 0x8202, + NBL_CHAN_MSG_ADMINQ_FLASH_WRITE = 0x8203, + NBL_CHAN_MSG_ADMINQ_FLASH_ACTIVATE = 0x8204, + NBL_CHAN_MSG_ADMINQ_RESOURCE_WRITE = 0x8205, + NBL_CHAN_MSG_ADMINQ_RESOURCE_READ = 0x8206, + NBL_CHAN_MSG_ADMINQ_REGISTER_WRITE = 0x8207, + NBL_CHAN_MSG_ADMINQ_REGISTER_READ = 0x8208, + NBL_CHAN_MSG_ADMINQ_GET_NVM_BANK_INDEX = 0x820B, + NBL_CHAN_MSG_ADMINQ_VERIFY_NVM_BANK = 0x820C, + NBL_CHAN_MSG_ADMINQ_FLASH_LOCK = 0x820D, + NBL_CHAN_MSG_ADMINQ_FLASH_UNLOCK = 0x820E, + NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES = 0x8300, + NBL_CHAN_MSG_ADMINQ_PORT_NOTIFY = 0x8301, + NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM = 0x8302, + NBL_CHAN_MSG_ADMINQ_GET_ETH_STATS = 0x8303, + NBL_CHAN_MSG_ADMINQ_GET_FEC_STATS = 0x8305, + + NBL_CHAN_MSG_ADMINQ_EMP_CONSOLE_WRITE = 0x8F01, + NBL_CHAN_MSG_ADMINQ_EMP_CONSOLE_READ = 0x8F02, + + 
NBL_CHAN_MSG_MAX, +}; + +#define NBL_CHAN_ADMINQ_FUNCTION_ID (0xFFFF) + +struct nbl_chan_vsi_qid_info { + u16 vsi_id; + u16 local_qid; +}; + +#define NBL_CHANNEL_FREEZE_FAILED_CNT 3 + +enum nbl_chan_state { + NBL_CHAN_INTERRUPT_READY, + NBL_CHAN_RESETTING, + NBL_CHAN_ABNORMAL, + NBL_CHAN_STATE_NBITS +}; + +struct nbl_chan_param_add_macvlan { + u8 mac[ETH_ALEN]; + u16 vlan; + u16 vsi; +}; + +struct nbl_chan_param_del_macvlan { + u8 mac[ETH_ALEN]; + u16 vlan; + u16 vsi; +}; + +struct nbl_chan_param_cfg_multi_mcast { + u16 vsi; + u16 enable; +}; + +struct nbl_chan_param_register_net_info { + u16 pf_bdf; + u64 vf_bar_start; + u64 vf_bar_size; + u16 total_vfs; + u16 offset; + u16 stride; + u64 pf_bar_start; + u16 is_vdpa; +}; + +struct nbl_chan_param_alloc_txrx_queues { + u16 vsi_id; + u16 queue_num; +}; + +struct nbl_chan_param_register_vsi2q { + u16 vsi_index; + u16 vsi_id; + u16 queue_offset; + u16 queue_num; +}; + +struct nbl_chan_param_setup_queue { + struct nbl_txrx_queue_param queue_param; + bool is_tx; +}; + +struct nbl_chan_param_cfg_dsch { + u16 vsi_id; + bool vld; +}; + +struct nbl_chan_param_setup_cqs { + u16 vsi_id; + u16 real_qps; + bool rss_indir_set; +}; + +struct nbl_chan_param_set_promisc_mode { + u16 vsi_id; + u16 mode; +}; + +struct nbl_chan_param_init_vf_msix_map { + u16 func_id; + bool enable; +}; + +struct nbl_chan_param_cfg_msix_map { + u16 num_net_msix; + u16 num_others_msix; + u16 msix_mask_en; +}; + +struct nbl_chan_param_enable_mailbox_irq { + u16 vector_id; + bool enable_msix; +}; + +struct nbl_chan_param_get_global_vector { + u16 vsi_id; + u16 vector_id; +}; + +struct nbl_chan_param_get_vsi_id { + u16 vsi_id; + u16 type; +}; + +struct nbl_chan_param_get_eth_id { + u16 vsi_id; + u8 eth_mode; + u8 eth_id; + u8 logic_eth_id; +}; + +struct nbl_chan_param_get_queue_info { + u16 queue_num; + u16 queue_size; +}; + +struct nbl_chan_result_get_real_bdf { + u8 bus; + u8 dev; + u8 function; +}; + +struct nbl_chan_resource_write_param { + u32 resid; + 
u32 offset; + u32 len; + u8 data[]; +}; + +struct nbl_chan_resource_read_param { + u32 resid; + u32 offset; + u32 len; +}; + +struct nbl_chan_adminq_reg_read_param { + u32 reg; +}; + +struct nbl_chan_adminq_reg_write_param { + u32 reg; + u32 value; +}; + +struct nbl_chan_param_set_sfp_state { + u8 eth_id; + u8 state; +}; + +struct nbl_chan_param_module_eeprom_info { + u8 eth_id; + u8 i2c_address; + u8 page; + u8 bank; + u32 write:1; + u32 version:2; + u32 rsvd:29; + u16 offset; + u16 length; +#define NBL_MODULE_EEPRO_WRITE_MAX_LEN (4) + u8 data[NBL_MODULE_EEPRO_WRITE_MAX_LEN]; +}; + +struct nbl_chan_param_set_rxfh_indir { + u16 vsi_id; + u32 indir_size; +#define NBL_RXFH_INDIR_MAX_SIZE (512) + u32 indir[NBL_RXFH_INDIR_MAX_SIZE]; +}; + +struct nbl_chan_param_set_eth_mac_addr { + u8 mac[ETH_ALEN]; + u8 eth_id; +}; + +struct nbl_chan_param_get_private_stat_data { + u32 eth_id; + u32 data_len; +}; + +struct nbl_chan_param_restore_queue { + u16 local_queue_id; + int type; +}; + +struct nbl_chan_param_restart_queue { + u16 local_queue_id; + int type; +}; + +struct nbl_chan_param_stop_abnormal_sw_queue { + u16 local_queue_id; + int type; +}; + +struct nbl_chan_param_stop_abnormal_hw_queue { + u16 vsi_id; + u16 local_queue_id; + int type; +}; + +struct nbl_chan_param_get_vf_func_id { + u16 vsi_id; + int vf_id; +}; + +struct nbl_chan_param_get_vf_vsi_id { + u16 vsi_id; + int vf_id; +}; + +struct nbl_chan_param_notify_link_state { + u8 link_state; + u32 link_speed; +}; + +struct nbl_chan_param_set_mtu { + u16 vsi_id; + u16 mtu; +}; + +struct nbl_register_net_param { + u16 pf_bdf; + u64 vf_bar_start; + u64 vf_bar_size; + u16 total_vfs; + u16 offset; + u16 stride; + u64 pf_bar_start; + u16 is_vdpa; +}; + +struct nbl_register_net_result { + u16 tx_queue_num; + u16 rx_queue_num; + u16 queue_size; + u16 rdma_enable; + + u64 hw_features; + u64 features; + + u16 max_mtu; + u16 queue_offset; + + u8 mac[ETH_ALEN]; + u16 vlan_proto; + u16 vlan_tci; + u32 rate; + bool trusted; + + u64 
vlan_features; + u64 hw_enc_features; +}; + +/* emp to ctrl dev notify */ +struct nbl_port_notify { + u32 id; + u32 speed; /* in 10 Mbps units */ + u8 link_state:1; /* 0:down, 1:up */ + u8 module_inplace:1; /* 0: not inplace, 1:inplace */ + u8 revd0:6; + u8 flow_ctrl; /* enum nbl_flow_ctrl */ + u8 fec; /* enum nbl_port_fec */ + u8 active_lanes; + u8 rsvd1[4]; + u64 advertising; /* enum nbl_port_cap */ + u64 lp_advertising; /* enum nbl_port_cap */ +}; + +#define NBL_EMP_LOG_MAX_SIZE (256) +struct nbl_emp_alert_log_event { + u64 uptime; + u8 level; + u8 data[256]; +}; + +#define NBL_EMP_ALERT_DATA_MAX_SIZE (4032) +struct nbl_chan_param_emp_alert_event { + u16 type; + u16 len; + u8 data[NBL_EMP_ALERT_DATA_MAX_SIZE]; +}; + +struct nbl_eth_link_info { + u8 link_status; + u32 link_speed; +}; + +struct nbl_board_port_info { + u8 eth_num; + u8 eth_speed; + u8 p4_version; + u8 rsv[5]; +}; + +enum nbl_fw_reset_type { + NBL_FW_HIGH_TEMP_RESET, + NBL_FW_RESET_TYPE_MAX, +}; + +struct nbl_chan_param_notify_fw_reset_info { + u16 type; /* enum nbl_fw_reset_type */ + u16 len; + u16 data[]; +}; + +struct nbl_chan_param_pf_name { + u16 vsi_id; + char dev_name[IFNAMSIZ]; +}; + +struct nbl_chan_param_check_flow_spec { + u16 vlan_list_cnt; + u16 unicast_mac_cnt; + u16 multi_mac_cnt; +}; + +struct nbl_chan_param_set_wol { + u8 eth_id; + bool enable; +}; + +struct nbl_chan_send_info { + void *arg; + size_t arg_len; + void *resp; + size_t resp_len; + u16 dstid; + u16 msg_type; + u16 ack; + u16 ack_len; +}; + +struct nbl_chan_ack_info { + void *data; + int err; + u32 data_len; + u16 dstid; + u16 msg_type; + u16 msgid; +}; + +enum nbl_channel_type { + NBL_CHAN_TYPE_MAILBOX, + NBL_CHAN_TYPE_ADMINQ, + NBL_CHAN_TYPE_MAX +}; + +struct nbl_channel_ops { + int (*send_msg)(void *priv, struct nbl_chan_send_info *chan_send); + int (*send_ack)(void *priv, struct nbl_chan_ack_info *chan_ack); + int (*register_msg)(void *priv, u16 msg_type, nbl_chan_resp func, + void *callback_priv); + void 
(*unregister_msg)(void *priv, u16 msg_type); + int (*cfg_chan_qinfo_map_table)(void *priv, u8 chan_type); + bool (*check_queue_exist)(void *priv, u8 chan_type); + int (*setup_queue)(void *priv, u8 chan_type); + int (*teardown_queue)(void *priv, u8 chan_type); + void (*clean_queue_subtask)(void *priv, u8 chan_type); + int (*setup_keepalive)(void *priv, u16 dest_id, u8 chan_type); + void (*remove_keepalive)(void *priv, u8 chan_type); + void (*register_chan_task)(void *priv, u8 chan_type, + struct work_struct *task); + void (*set_queue_state)(void *priv, enum nbl_chan_state state, + u8 chan_type, u8 set); +}; + +struct nbl_channel_ops_tbl { + struct nbl_channel_ops *ops; + void *priv; +}; + +int nbl_chan_init_common(void *p, struct nbl_init_param *param); +void nbl_chan_remove_common(void *p); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h index 3533b853abc4..57d88ef0fb6d 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h @@ -105,4 +105,191 @@ struct nbl_common_info { bool wol_ena; }; +struct nbl_hash_tbl_key { + struct device *dev; + u16 key_size; + u16 data_size; /* no include key or node member */ + u16 bucket_size; + u8 lock_need; /* true: support multi thread operation */ + u8 resv; +}; + +#define NBL_HASH_TBL_KEY_INIT(key, dev_arg, key_size_arg, data_size_arg,\ + bucket_size_arg, lock_need_args) \ +do { \ + typeof(key) __key = key; \ + __key->dev = dev_arg; \ + __key->key_size = key_size_arg; \ + __key->data_size = data_size_arg; \ + __key->bucket_size = bucket_size_arg; \ + __key->lock_need = lock_need_args; \ + __key->resv = 0; \ +} while (0) + +enum nbl_hash_tbl_op_type { + NBL_HASH_TBL_OP_SHOW = 0, + NBL_HASH_TBL_OP_DELETE, +}; + +struct nbl_hash_tbl_del_key { + void *action_priv; + void (*action_func)(void *priv, void *key, void *data); +}; + 
+#define NBL_HASH_TBL_DEL_KEY_INIT(key, priv_arg, act_func_arg) \ +do { \ + typeof(key) __key = key; \ + __key->action_priv = priv_arg; \ + __key->action_func = act_func_arg; \ +} while (0) + +struct nbl_hash_tbl_scan_key { + enum nbl_hash_tbl_op_type op_type; + void *match_condition; + /* match ret value must be 0 if the node accords with the condition */ + int (*match_func)(void *condition, void *key, void *data); + void *action_priv; + void (*action_func)(void *priv, void *key, void *data); +}; + +#define NBL_HASH_TBL_SCAN_KEY_INIT(key, op_type_arg, con_arg, match_func_arg,\ + priv_arg, act_func_arg) \ +do { \ + typeof(key) __key = key; \ + __key->op_type = op_type_arg; \ + __key->match_condition = con_arg; \ + __key->match_func = match_func_arg; \ + __key->action_priv = priv_arg; \ + __key->action_func = act_func_arg; \ +} while (0) + +struct nbl_hash_xy_tbl_key { + struct device *dev; + u16 x_key_size; + u16 y_key_size; /* y_key_len = key_len - x_key_len */ + u16 data_size; /* does not include key or node members */ + u16 bucket_size; + u16 x_bucket_size; + u16 y_bucket_size; + u8 lock_need; /* true: lock for multi-thread operation */ + u8 resv[3]; +}; + +#define NBL_HASH_XY_TBL_KEY_INIT(key, dev_arg, x_key_size_arg, y_key_size_arg,\ + data_size_arg, bucket_size_args, \ + x_bucket_size_arg, y_bucket_size_arg, \ + lock_need_args) \ +do { \ + typeof(key) __key = key; \ + __key->dev = dev_arg; \ + __key->x_key_size = x_key_size_arg; \ + __key->y_key_size = y_key_size_arg; \ + __key->data_size = data_size_arg; \ + __key->bucket_size = bucket_size_args; \ + __key->x_bucket_size = x_bucket_size_arg; \ + __key->y_bucket_size = y_bucket_size_arg; \ + __key->lock_need = lock_need_args; \ + memset(__key->resv, 0, sizeof(__key->resv)); \ +} while (0) + +enum nbl_hash_xy_tbl_scan_type { + NBL_HASH_TBL_ALL_SCAN = 0, + NBL_HASH_TBL_X_AXIS_SCAN, + NBL_HASH_TBL_Y_AXIS_SCAN, +}; + +/* + * "only_query_exist" usage: + * if true, only query whether a matching node exists. E.g. with + * x_axis: mac and y_axis: vlan, to query whether the table has a given + * "mac", fill nbl_hash_xy_tbl_scan_key as follows: + * op_type = NBL_HASH_TBL_OP_SHOW; + * scan_type = NBL_HASH_TBL_X_AXIS_SCAN; + * only_query_exist = true; + * x_key = the mac_addr; + * y_key = NULL; + * match_func = NULL; + * action_func = NULL; + */ +struct nbl_hash_xy_tbl_scan_key { + enum nbl_hash_tbl_op_type op_type; + enum nbl_hash_xy_tbl_scan_type scan_type; + bool only_query_exist; + u8 resv[3]; + void *x_key; + void *y_key; + void *match_condition; + /* match ret value must be 0 if the node accords with the condition */ + int (*match_func)(void *condition, void *x_key, void *y_key, + void *data); + void *action_priv; + void (*action_func)(void *priv, void *x_key, void *y_key, void *data); +}; + +#define NBL_HASH_XY_TBL_SCAN_KEY_INIT(key, op_type_arg, scan_type_arg, \ + query_flag_arg, x_key_arg, y_key_arg,\ + con_arg, match_func_arg, priv_arg,\ + act_func_arg) \ +do { \ + typeof(key) __key = key; \ + __key->op_type = op_type_arg; \ + __key->scan_type = scan_type_arg; \ + __key->only_query_exist = query_flag_arg; \ + memset(__key->resv, 0, sizeof(__key->resv)); \ + __key->x_key = x_key_arg; \ + __key->y_key = y_key_arg; \ + __key->match_condition = con_arg; \ + __key->match_func = match_func_arg; \ + __key->action_priv = priv_arg; \ + __key->action_func = act_func_arg; \ +} while (0) + +struct nbl_hash_xy_tbl_del_key { + void *action_priv; + void (*action_func)(void *priv, void *x_key, void *y_key, void *data); +}; + +#define NBL_HASH_XY_TBL_DEL_KEY_INIT(key, priv_arg, act_func_arg) \ +do { \ + typeof(key) __key = key; \ + __key->action_priv = priv_arg; \ + __key->action_func = act_func_arg; \ +} while (0) + +void nbl_convert_mac(u8 *mac, u8 *reverse_mac); + +void nbl_common_queue_work(struct work_struct *task, bool ctrl_task); +void nbl_common_q_dwork(struct 
delayed_work *task, u32 msec, bool ctrl_task); +void nbl_common_q_dwork_keepalive(struct delayed_work *task, u32 msec); +void nbl_common_release_task(struct work_struct *task); +void nbl_common_alloc_task(struct work_struct *task, void *func); +void nbl_common_release_delayed_task(struct delayed_work *task); +void nbl_common_alloc_delayed_task(struct delayed_work *task, void *func); +void nbl_common_flush_task(struct work_struct *task); + +void nbl_common_destroy_wq(void); +int nbl_common_create_wq(void); +u32 nbl_common_pf_id_subtraction_mgtpf_id(struct nbl_common_info *common, + u32 pf_id); + +int nbl_common_find_free_idx(unsigned long *addr, u32 size, u32 idx_num, + u32 multiple); + +void *nbl_common_init_hash_table(struct nbl_hash_tbl_key *key); +void nbl_common_remove_hash_table(void *priv, struct nbl_hash_tbl_del_key *key); +int nbl_common_alloc_hash_node(void *priv, void *key, void *data, + void **out_data); +void *nbl_common_get_hash_node(void *priv, void *key); +void nbl_common_free_hash_node(void *priv, void *key); + +void *nbl_common_init_hash_xy_table(struct nbl_hash_xy_tbl_key *key); +void nbl_common_rm_hash_xy_table(void *priv, + struct nbl_hash_xy_tbl_del_key *key); +int nbl_common_alloc_hash_xy_node(void *priv, void *x_key, void *y_key, + void *data); +void *nbl_common_get_hash_xy_node(void *priv, void *x_key, void *y_key); +void nbl_common_free_hash_xy_node(void *priv, void *x_key, void *y_key); +u16 nbl_common_scan_hash_xy_node(void *priv, + struct nbl_hash_xy_tbl_scan_key *key); #endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h index 6ac72e26ccd6..1096feea5ce6 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h @@ -10,6 +10,33 @@ #include "nbl_include.h" struct nbl_hw_ops { + void (*update_mailbox_queue_tail_ptr)(void *priv, u16 tail_ptr, + u8 txrx); + 
void (*config_mailbox_rxq)(void *priv, dma_addr_t dma_addr, + int size_bwid); + void (*config_mailbox_txq)(void *priv, dma_addr_t dma_addr, + int size_bwid); + void (*stop_mailbox_rxq)(void *priv); + void (*stop_mailbox_txq)(void *priv); + u16 (*get_mailbox_rx_tail_ptr)(void *priv); + bool (*check_mailbox_dma_err)(void *priv, bool tx); + u32 (*get_host_pf_mask)(void *priv); + + void (*cfg_mailbox_qinfo)(void *priv, u16 func_id, u16 bus, u16 devid, + u16 function); + void (*config_adminq_rxq)(void *priv, dma_addr_t dma_addr, + int size_bwid); + void (*config_adminq_txq)(void *priv, dma_addr_t dma_addr, + int size_bwid); + void (*stop_adminq_rxq)(void *priv); + void (*stop_adminq_txq)(void *priv); + void (*cfg_adminq_qinfo)(void *priv, u16 bus, u16 devid, u16 function); + void (*update_adminq_queue_tail_ptr)(void *priv, u16 tail_ptr, u8 txrx); + bool (*check_adminq_dma_err)(void *priv, bool tx); + + void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); + enum nbl_hw_status (*get_hw_status)(void *priv); + }; struct nbl_hw_ops_tbl { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index e620feb382c1..64ac886f0ba2 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -13,6 +13,7 @@ #define NBL_DRIVER_NAME "nbl_core" #define NBL_MAX_PF 8 + #define NBL_NEXT_ID(id, max) \ ({ \ typeof(id) _id = (id); \ @@ -46,4 +47,70 @@ struct nbl_init_param { bool pci_using_dac; }; +struct nbl_txrx_queue_param { + u16 vsi_id; + u64 dma; + u64 avail; + u64 used; + u16 desc_num; + u16 local_queue_id; + u16 intr_en; + u16 intr_mask; + u16 global_vec_id; + u16 half_offload_en; + u16 split; + u16 extend_header; + u16 cxt; + u16 rxcsum; +}; + +struct nbl_qid_map_table { + u32 local_qid; + u32 notify_addr_l; + u32 notify_addr_h; + u32 global_qid; + u32 ctrlq_flag; +}; + +struct 
nbl_qid_map_param { + struct nbl_qid_map_table *qid_map; + u16 start; + u16 len; +}; + +struct nbl_queue_cfg_param { + /* queue args*/ + u64 desc; + u64 avail; + u64 used; + u16 size; + u16 extend_header; + u16 split; + u16 last_avail_idx; + u16 global_queue_id; + + /*interrupt args*/ + u16 global_vector; + u16 intr_en; + u16 intr_mask; + + /* dvn args */ + u16 tx; + + /* uvn args*/ + u16 rxcsum; + u16 half_offload_en; +}; + +enum nbl_fw_port_speed { + NBL_FW_PORT_SPEED_10G, + NBL_FW_PORT_SPEED_25G, + NBL_FW_PORT_SPEED_50G, + NBL_FW_PORT_SPEED_100G, +}; + +enum nbl_performance_mode { + NBL_QUIRKS_NO_TOE, + NBL_QUIRKS_UVN_PREFETCH_ALIGN, +}; #endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c index a93aa98f2316..3276dd2936ae 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c @@ -13,8 +13,8 @@ static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = { .hw_remove = nbl_hw_remove_leonis, .res_init = NULL, .res_remove = NULL, - .chan_init = NULL, - .chan_remove = NULL, + .chan_init = nbl_chan_init_common, + .chan_remove = nbl_chan_remove_common, }, }; @@ -69,7 +69,12 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, if (ret) goto hw_init_fail; + ret = product_base_ops->chan_init(adapter, param); + if (ret) + goto chan_init_fail; return adapter; +chan_init_fail: + product_base_ops->hw_remove(adapter); hw_init_fail: devm_kfree(&pdev->dev, adapter); return NULL; @@ -82,6 +87,7 @@ void nbl_core_remove(struct nbl_adapter *adapter) dev = NBL_ADAP_TO_DEV(adapter); product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter); + product_base_ops->chan_remove(adapter); product_base_ops->hw_remove(adapter); devm_kfree(dev, adapter); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 06/15] net/nebula-matrix: add resource layer definitions and implementation 2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang ` (4 preceding siblings ...) 2026-01-09 10:01 ` [PATCH v2 net-next 05/15] net/nebula-matrix: add channel layer definitions and implementation illusion.wang @ 2026-01-09 10:01 ` illusion.wang 2026-01-09 10:01 ` [PATCH v2 net-next 07/15] net/nebula-matrix: add intr resource " illusion.wang ` (9 subsequent siblings) 15 siblings, 0 replies; 19+ messages in thread From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw) To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko, lukas.bulwahn, edumazet, open list The resource layer manages the entries/data of the various modules within the chip to accomplish specific entry management operations; it describes the per-module capabilities of the chip and the data those modules manage. The resource layer comprises the following sub-modules: common, adminq, interrupt, txrx, flow, queue, and vsi. 
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 3 + .../net/ethernet/nebula-matrix/nbl/nbl_core.h | 7 + .../nebula-matrix/nbl/nbl_hw/nbl_adminq.c | 110 ++ .../nebula-matrix/nbl/nbl_hw/nbl_adminq.h | 160 +++ .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 319 ++++++ .../nbl_hw_leonis/nbl_resource_leonis.c | 998 ++++++++++++++++++ .../nbl_hw_leonis/nbl_resource_leonis.h | 13 + .../nebula-matrix/nbl/nbl_hw/nbl_resource.c | 427 ++++++++ .../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 860 +++++++++++++++ .../nbl/nbl_include/nbl_def_common.h | 19 + .../nbl/nbl_include/nbl_def_hw.h | 17 +- .../nbl/nbl_include/nbl_def_resource.h | 183 ++++ .../nbl/nbl_include/nbl_include.h | 189 ++++ .../net/ethernet/nebula-matrix/nbl/nbl_main.c | 11 +- 14 files changed, 3312 insertions(+), 4 deletions(-) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index db04128977d5..977544cd1b95 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -7,7 +7,10 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o nbl_core-objs += nbl_common/nbl_common.o \ nbl_channel/nbl_channel.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \ + nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \ + nbl_hw/nbl_resource.o 
\ + nbl_hw/nbl_adminq.o \ nbl_main.o # Provide include files diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h index fe83bd9f524c..6c7e2549ff8b 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h @@ -11,6 +11,7 @@ #include "nbl_product_base.h" #include "nbl_def_channel.h" #include "nbl_def_hw.h" +#include "nbl_def_resource.h" #include "nbl_def_common.h" #define NBL_ADAP_TO_PDEV(adapter) ((adapter)->pdev) @@ -19,10 +20,15 @@ #define NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter) ((adapter)->product_base_ops) #define NBL_ADAP_TO_HW_MGT(adapter) ((adapter)->core.hw_mgt) +#define NBL_ADAP_TO_RES_MGT(adapter) ((adapter)->core.res_mgt) #define NBL_ADAP_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt) #define NBL_ADAP_TO_HW_OPS_TBL(adapter) ((adapter)->intf.hw_ops_tbl) +#define NBL_ADAP_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl) #define NBL_ADAP_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl) +#define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \ + (&(NBL_ADAP_TO_SERV_OPS_TBL(adapter)->pt_ops)) + #define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1) #define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT) @@ -43,6 +49,7 @@ enum { struct nbl_interface { struct nbl_hw_ops_tbl *hw_ops_tbl; + struct nbl_resource_ops_tbl *resource_ops_tbl; struct nbl_channel_ops_tbl *channel_ops_tbl; }; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c new file mode 100644 index 000000000000..2db160a92256 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c @@ -0,0 +1,110 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#include "nbl_adminq.h" + +static int nbl_res_aq_set_sfp_state(void *priv, u8 eth_id, u8 state) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct nbl_chan_send_info chan_send; + struct nbl_port_key *param; + int param_len = 0; + u64 data = 0; + u64 key = 0; + int ret; + + param_len = sizeof(struct nbl_port_key) + 1 * sizeof(u64); + param = kzalloc(param_len, GFP_KERNEL); + if (!param) + return -ENOMEM; + key = NBL_PORT_KEY_MODULE_SWITCH; + if (state) + data = NBL_PORT_SFP_ON + (key << NBL_PORT_KEY_KEY_SHIFT); + else + data = NBL_PORT_SFP_OFF + (key << NBL_PORT_KEY_KEY_SHIFT); + + memset(param, 0, param_len); + param->id = eth_id; + param->subop = NBL_PORT_SUBOP_WRITE; + param->data[0] = data; + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, param, + param_len, NULL, 0, 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed with ret: %d, msg_type: 0x%x, eth_id:%d, sfp %s\n", + ret, NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, + eth_info->logic_eth_id[eth_id], state ? 
"on" : "off"); + kfree(param); + return ret; + } + + kfree(param); + return 0; +} + +int nbl_res_open_sfp(struct nbl_resource_mgt *res_mgt, u8 eth_id) +{ + return nbl_res_aq_set_sfp_state(res_mgt, eth_id, NBL_SFP_MODULE_ON); +} + +static int nbl_res_aq_get_eth_mac_addr(void *priv, u8 *mac, u8 eth_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_chan_send_info chan_send; + struct nbl_port_key *param; + u64 data = 0, key = 0, result = 0; + int param_len = 0, i, ret; + u8 reverse_mac[ETH_ALEN]; + + param_len = sizeof(struct nbl_port_key) + 1 * sizeof(u64); + param = kzalloc(param_len, GFP_KERNEL); + if (!param) + return -ENOMEM; + key = NBL_PORT_KEY_MAC_ADDRESS; + + data += (key << NBL_PORT_KEY_KEY_SHIFT); + + memset(param, 0, param_len); + param->id = eth_id; + param->subop = NBL_PORT_SUBOP_READ; + param->data[0] = data; + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, param, + param_len, &result, sizeof(result), 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed with ret: %d, msg_type: 0x%x, eth_id:%d\n", + ret, NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, + eth_info->logic_eth_id[eth_id]); + kfree(param); + return ret; + } + + memcpy(reverse_mac, &result, ETH_ALEN); + + /*convert mac address*/ + for (i = 0; i < ETH_ALEN; i++) + mac[i] = reverse_mac[ETH_ALEN - 1 - i]; + + kfree(param); + return 0; +} + +int nbl_res_get_eth_mac(struct nbl_resource_mgt *res_mgt, u8 *mac, u8 eth_id) +{ + return nbl_res_aq_get_eth_mac_addr(res_mgt, mac, eth_id); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.h new file mode 100644 
index 000000000000..f4a214856d99 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.h @@ -0,0 +1,160 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_ADMINQ_H_ +#define _NBL_ADMINQ_H_ + +#include "nbl_resource.h" + +/* SPI Bank Index */ +#define BANKID_DESC_BANK 0xA0 +#define BANKID_BOOT_BANK 0xA1 +#define BANKID_SR_BANK0 0xA2 +#define BANKID_SR_BANK1 0xA3 +#define BANKID_OSI_BANK0 0xA4 +#define BANKID_OSI_BANK1 0xA5 +#define BANKID_FSI_BANK0 0xA6 +#define BANKID_FSI_BANK1 0xA7 +#define BANKID_HW_BANK 0xA8 +#define BANKID_NVM_BANK0 0xA9 +#define BANKID_NVM_BANK1 0xAA +#define BANKID_LOG_BANK 0xAB + +#define NBL_ADMINQ_IDX_LEN 4096 + +#define NBL_MAX_HW_I2C_RESP_SIZE 128 + +#define I2C_DEV_ADDR_A0 0x50 +#define I2C_DEV_ADDR_A2 0x51 + +/* SFF moudle register addresses: 8 bit valid */ +#define SFF_8472_IDENTIFIER 0x0 +#define SFF_8472_10GB_CAPABILITY 0x3 /* check sff-8472 table 5-3 */ +#define SFF_8472_1GB_CAPABILITY 0x6 /* check sff-8472 table 5-3 */ +#define SFF_8472_CABLE_TECHNOLOGY 0x8 /* check sff-8472 table 5-3 */ +#define SFF_8472_EXTENDED_CAPA 0x24 /* check sff-8024 table 4-4 */ +#define SFF_8472_CABLE_SPEC_COMP 0x3C +#define SFF_8472_DIAGNOSTIC \ + 0x5C /* digital diagnostic monitoring, relates to A2 */ +#define SFF_8472_COMPLIANCE 0x5E /* the specification revision version */ +#define SFF_8472_VENDOR_NAME 0x14 +#define SFF_8472_VENDOR_NAME_LEN \ + 16 /* 16 bytes, from offset 0x14 to offset 0x23 */ +#define SFF_8472_VENDOR_PN 0x28 +#define SFF_8472_VENDOR_PN_LEN 16 +#define SFF_8472_VENDOR_OUI 0x25 /* name and oui cannot all be empty */ +#define SFF_8472_VENDOR_OUI_LEN 3 +#define SFF_8472_SIGNALING_RATE 0xC +#define SFF_8472_SIGNALING_RATE_MAX 0x42 +#define SFF_8472_SIGNALING_RATE_MIN 0x43 +/* optional status/control bits: soft rate select and tx disable */ +#define SFF_8472_OSCB 0x6E +/* extended status/control bits */ +#define SFF_8472_ESCB 0x76 +#define 
SFF8636_DEVICE_TECH_OFFSET 0x93 + +#define SFF_8636_VENDOR_ENCODING 0x8B +#define SFF_8636_ENCODING_PAM4 0x8 + +/* SFF status code */ +#define SFF_IDENTIFIER_SFP 0x3 +#define SFF_IDENTIFIER_QSFP28 0x11 +#define SFF_IDENTIFIER_PAM4 0x1E +#define SFF_PASSIVE_CABLE 0x4 +#define SFF_ACTIVE_CABLE 0x8 +#define SFF_8472_ADDRESSING_MODE 0x4 +#define SFF_8472_UNSUPPORTED 0x00 +#define SFF_8472_10G_SR_BIT 4 /* 850nm, short reach */ +#define SFF_8472_10G_LR_BIT 5 /* 1310nm, long reach */ +#define SFF_8472_10G_LRM_BIT 6 /* 1310nm, long reach multimode */ +#define SFF_8472_10G_ER_BIT 7 /* 1550nm, extended reach */ +#define SFF_8472_1G_SX_BIT 0 +#define SFF_8472_1G_LX_BIT 1 +#define SFF_8472_1G_CX_BIT 2 +#define SFF_8472_1G_T_BIT 3 +#define SFF_8472_SOFT_TX_DISABLE 6 +#define SFF_8472_SOFT_RATE_SELECT 4 +#define SFF_8472_EMPTY_ASCII 20 +#define SFF_DDM_IMPLEMENTED 0x40 +#define SFF_COPPER_UNSPECIFIED 0 +#define SFF_COPPER_8431_APPENDIX_E 1 +#define SFF_COPPER_8431_LIMITING 4 +#define SFF_8636_TURNPAGE_ADDR (127) +#define SFF_8638_PAGESIZE (128U) +#define SFF_8638_PAGE0_SIZE (256U) + +#define SFF_8636_TEMP (0x60) +#define SFF_8636_TEMP_MAX (0x4) +#define SFF_8636_TEMP_CIRT (0x0) + +#define SFF_8636_QSFP28_TEMP (0x16) +#define SFF_8636_QSFP28_TEMP_MAX (0x204) +#define SFF_8636_QSFP28_TEMP_CIRT (0x200) + +#define SFF8636_TRANSMIT_FIBER_850nm_VCSEL (0x0) +#define SFF8636_TRANSMIT_FIBER_1310nm_VCSEL (0x1) +#define SFF8636_TRANSMIT_FIBER_1550nm_VCSEL (0x2) +#define SFF8636_TRANSMIT_FIBER_1310nm_FP (0x3) +#define SFF8636_TRANSMIT_FIBER_1310nm_DFB (0x4) +#define SFF8636_TRANSMIT_FIBER_1550nm_DFB (0x5) +#define SFF8636_TRANSMIT_FIBER_1310nm_EML (0x6) +#define SFF8636_TRANSMIT_FIBER_1550nm_EML (0x7) +#define SFF8636_TRANSMIT_FIBER_OTHER (0x8) +#define SFF8636_TRANSMIT_FIBER_1490nm_DFB (0x9) +#define SFF8636_TRANSMIT_COPPER_UNEQUA (0xa) +#define SFF8636_TRANSMIT_COPPER_PASSIVE_EQUALIZED (0xb) +#define SFF8636_TRANSMIT_COPPER_NEAR_FAR_END (0xc) +#define SFF8636_TRANSMIT_COPPER_FAR_END (0xd) 
+#define SFF8636_TRANSMIT_COPPER_NEAR_END (0xe) +#define SFF8636_TRANSMIT_COPPER_LINEAR_ACTIVE (0xf) + +#define NBL_ADMINQ_ETH_WOL_REG_OFFSET (0x1604000 + 0x500) + +/* VSI fixed number of queues*/ +#define NBL_VSI_PF_LEGAL_QUEUE_NUM(num) ((num) + NBL_DEFAULT_REP_HW_QUEUE_NUM) +#define NBL_VSI_PF_MAX_QUEUE_NUM(num) \ + (((num) * 2) + NBL_DEFAULT_REP_HW_QUEUE_NUM) +#define NBL_VSI_VF_REAL_QUEUE_NUM(num) (num) + +#define NBL_ADMINQ_RESID_FSI_SECTION_HBC (0x3000) +#define NBL_ADMINQ_RESID_FSI_TLV_SERIAL_NUMBER (0x3801) +#define NBL_ADMINQ_PFA_TLV_VF_NUM (0x5804) +#define NBL_ADMINQ_PFA_TLV_NET_RING_NUM (0x5805) + +struct nbl_port_key { + u32 id; /* port id */ + u32 subop; /* 1: read, 2: write */ + u64 data[]; /* [47:0]: data, [55:48]: rsvd, [63:56]: key */ +}; + +#define NBL_PORT_KEY_ILLEGAL 0x0 +#define NBL_PORT_KEY_CAPABILITIES 0x1 +#define NBL_PORT_KEY_ENABLE 0x2 /* BIT(0): NBL_PORT_FLAG_ENABLE_NOTIFY */ +#define NBL_PORT_KEY_DISABLE 0x3 +#define NBL_PORT_KEY_ADVERT 0x4 +#define NBL_PORT_KEY_LOOPBACK \ + 0x5 /* 0: disable eth loopback, 1: enable eth loopback */ +#define NBL_PORT_KEY_MODULE_SWITCH 0x6 /* 0: sfp off, 1: sfp on */ +#define NBL_PORT_KEY_MAC_ADDRESS 0x7 +#define NBL_PORT_KEY_LED_BLINK 0x8 +#define NBL_PORT_KEY_RESTORE_DEFAULTE_CFG 11 +#define NBL_PORT_KEY_SET_PFC_CFG 12 +#define NBL_PORT_KEY_GET_LINK_STATUS_OPCODE 17 + +enum { + NBL_PORT_SUBOP_READ = 1, + NBL_PORT_SUBOP_WRITE = 2, +}; + +#define NBL_PORT_FLAG_ENABLE_NOTIFY BIT(0) +#define NBL_PORT_ENABLE_LOOPBACK 1 +#define NBL_PORT_DISABLE_LOOPBCK 0 +#define NBL_PORT_SFP_ON 1 +#define NBL_PORT_SFP_OFF 0 +#define NBL_PORT_KEY_KEY_SHIFT 56 +#define NBL_PORT_KEY_DATA_MASK 0xFFFFFFFFFFFF + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c index bf7c95ea33da..57cae6baaafd 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c +++ 
b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c @@ -5,6 +5,19 @@ */ #include "nbl_hw_leonis.h" +static u32 nbl_hw_get_quirks(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = priv; + u32 quirks; + + nbl_hw_read_mbx_regs(hw_mgt, NBL_LEONIS_QUIRKS_OFFSET, (u8 *)&quirks, + sizeof(u32)); + + if (quirks == NBL_LEONIS_ILLEGAL_REG_VALUE) + return 0; + + return quirks; +} static void nbl_hw_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr, u8 txrx) @@ -110,6 +123,71 @@ static u32 nbl_hw_get_host_pf_mask(void *priv) return data; } +static u32 nbl_hw_get_host_pf_fid(void *priv, u16 func_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u32 data; + + nbl_hw_rd_regs(hw_mgt, NBL_PCIE_HOST_K_PF_FID(func_id), (u8 *)&data, + sizeof(data)); + return data; +} + +static u32 nbl_hw_get_real_bus(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u32 data; + + data = nbl_hw_rd32(hw_mgt, NBL_PCIE_HOST_TL_CFG_BUSDEV); + return data >> 5; +} + +static u64 nbl_hw_get_pf_bar_addr(void *priv, u16 func_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u64 addr; + u32 val; + u32 selector; + + selector = NBL_LB_PF_CONFIGSPACE_SELECT_OFFSET + + func_id * NBL_LB_PF_CONFIGSPACE_SELECT_STRIDE; + nbl_hw_wr32(hw_mgt, NBL_LB_PCIEX16_TOP_AHB, selector); + + val = nbl_hw_rd32(hw_mgt, + NBL_LB_PF_CONFIGSPACE_BASE_ADDR + PCI_BASE_ADDRESS_0); + addr = (u64)(val & PCI_BASE_ADDRESS_MEM_MASK); + + val = nbl_hw_rd32(hw_mgt, NBL_LB_PF_CONFIGSPACE_BASE_ADDR + + PCI_BASE_ADDRESS_0 + 4); + addr |= ((u64)val << 32); + + return addr; +} + +static u64 nbl_hw_get_vf_bar_addr(void *priv, u16 func_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u64 addr; + u32 val; + u32 selector; + + selector = NBL_LB_PF_CONFIGSPACE_SELECT_OFFSET + + func_id * NBL_LB_PF_CONFIGSPACE_SELECT_STRIDE; + nbl_hw_wr32(hw_mgt, NBL_LB_PCIEX16_TOP_AHB, selector); + + val = nbl_hw_rd32(hw_mgt, NBL_LB_PF_CONFIGSPACE_BASE_ADDR + + 
NBL_SRIOV_CAPS_OFFSET + + PCI_SRIOV_BAR); + addr = (u64)(val & PCI_BASE_ADDRESS_MEM_MASK); + + val = nbl_hw_rd32(hw_mgt, NBL_LB_PF_CONFIGSPACE_BASE_ADDR + + NBL_SRIOV_CAPS_OFFSET + + PCI_SRIOV_BAR + 4); + addr |= ((u64)val << 32); + + return addr; +} + static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus, u16 devid, u16 function) { @@ -230,6 +308,234 @@ static bool nbl_hw_check_adminq_dma_err(void *priv, bool tx) return false; } +static u8 __iomem *nbl_hw_get_hw_addr(void *priv, size_t *size) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + if (size) + *size = (size_t)hw_mgt->hw_size; + return hw_mgt->hw_addr; +} + +static void nbl_hw_set_fw_ping(void *priv, u32 ping) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + nbl_hw_write_mbx_regs(hw_mgt, NBL_FW_HEARTBEAT_PING, (u8 *)&ping, + sizeof(ping)); +} + +static u32 nbl_hw_get_fw_pong(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u32 pong; + + nbl_hw_rd_regs(hw_mgt, NBL_FW_HEARTBEAT_PONG, (u8 *)&pong, + sizeof(pong)); + + return pong; +} + +static void nbl_hw_set_fw_pong(void *priv, u32 pong) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + nbl_hw_wr_regs(hw_mgt, NBL_FW_HEARTBEAT_PONG, (u8 *)&pong, + sizeof(pong)); +} + +static int nbl_hw_process_abnormal_queue(struct nbl_hw_mgt *hw_mgt, + u16 queue_id, int type, + struct nbl_abnormal_details *detail) +{ + struct nbl_ipro_queue_tbl ipro_queue_tbl = { 0 }; + struct nbl_host_vnet_qinfo host_vnet_qinfo = { 0 }; + u32 qinfo_id = type == NBL_ABNORMAL_EVENT_DVN ? 
+ NBL_PAIR_ID_GET_TX(queue_id) : + NBL_PAIR_ID_GET_RX(queue_id); + + if (type >= NBL_ABNORMAL_EVENT_MAX) + return -EINVAL; + + nbl_hw_rd_regs(hw_mgt, NBL_IPRO_QUEUE_TBL(queue_id), + (u8 *)&ipro_queue_tbl, sizeof(ipro_queue_tbl)); + + detail->abnormal = true; + detail->qid = queue_id; + detail->vsi_id = ipro_queue_tbl.vsi_id; + + nbl_hw_rd_regs(hw_mgt, NBL_PADPT_HOST_VNET_QINFO_REG_ARR(qinfo_id), + (u8 *)&host_vnet_qinfo, sizeof(host_vnet_qinfo)); + host_vnet_qinfo.valid = 1; + nbl_hw_wr_regs(hw_mgt, NBL_PADPT_HOST_VNET_QINFO_REG_ARR(qinfo_id), + (u8 *)&host_vnet_qinfo, sizeof(host_vnet_qinfo)); + + return 0; +} + +static int +nbl_hw_process_abnormal_event(void *priv, + struct nbl_abnormal_event_info *abnomal_info) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct device *dev = NBL_HW_MGT_TO_DEV(hw_mgt); + struct dvn_desc_dif_err_info desc_err_info = { 0 }; + struct dvn_pkt_dif_err_info pkt_dif_err_info = { 0 }; + struct dvn_err_queue_id_get err_queue_id_get = { 0 }; + struct uvn_queue_err_info queue_err_info = { 0 }; + struct nbl_abnormal_details *detail; + u32 int_status = 0, rdma_other_abn = 0, tlp_out_drop_cnt = 0; + u32 desc_dif_err_cnt = 0, pkt_dif_err_cnt = 0; + u32 queue_err_cnt; + int ret = 0; + + nbl_hw_rd_regs(hw_mgt, NBL_DVN_INT_STATUS, (u8 *)&int_status, + sizeof(u32)); + if (int_status == U32_MAX) + dev_info(dev, "dvn int_status:0x%x", int_status); + + if (int_status && int_status != U32_MAX) { + if (int_status & BIT(NBL_DVN_INT_DESC_DIF_ERR)) { + nbl_hw_rd_regs(hw_mgt, NBL_DVN_DESC_DIF_ERR_CNT, + (u8 *)&desc_dif_err_cnt, sizeof(u32)); + nbl_hw_rd_regs(hw_mgt, NBL_DVN_DESC_DIF_ERR_INFO, + (u8 *)&desc_err_info, + sizeof(struct dvn_desc_dif_err_info)); + dev_info(dev, "dvn int_status:0x%x, desc_dif_mf_cnt:%d, queue_id:%d\n", + int_status, desc_dif_err_cnt, + desc_err_info.queue_id); + detail = &abnomal_info->details[NBL_ABNORMAL_EVENT_DVN]; + nbl_hw_process_abnormal_queue(hw_mgt, + desc_err_info.queue_id, + NBL_ABNORMAL_EVENT_DVN, + 
detail); + + ret |= BIT(NBL_ABNORMAL_EVENT_DVN); + } + + if (int_status & BIT(NBL_DVN_INT_PKT_DIF_ERR)) { + nbl_hw_rd_regs(hw_mgt, NBL_DVN_PKT_DIF_ERR_CNT, + (u8 *)&pkt_dif_err_cnt, sizeof(u32)); + nbl_hw_rd_regs(hw_mgt, NBL_DVN_PKT_DIF_ERR_INFO, + (u8 *)&pkt_dif_err_info, + sizeof(struct dvn_pkt_dif_err_info)); + dev_info(dev, "dvn int_status:0x%x, pkt_dif_mf_cnt:%d, queue_id:%d\n", + int_status, pkt_dif_err_cnt, + pkt_dif_err_info.queue_id); + } + + /* clear dvn abnormal irq */ + nbl_hw_wr_regs(hw_mgt, NBL_DVN_INT_STATUS, (u8 *)&int_status, + sizeof(int_status)); + + /* enable new queue error irq */ + err_queue_id_get.desc_flag = 1; + err_queue_id_get.pkt_flag = 1; + nbl_hw_wr_regs(hw_mgt, NBL_DVN_ERR_QUEUE_ID_GET, + (u8 *)&err_queue_id_get, + sizeof(err_queue_id_get)); + } + + int_status = 0; + nbl_hw_rd_regs(hw_mgt, NBL_UVN_INT_STATUS, (u8 *)&int_status, + sizeof(u32)); + if (int_status == U32_MAX) + dev_info(dev, "uvn int_status:0x%x", int_status); + if (int_status && int_status != U32_MAX) { + nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_ERR_CNT, + (u8 *)&queue_err_cnt, sizeof(u32)); + nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_ERR_INFO, + (u8 *)&queue_err_info, + sizeof(struct uvn_queue_err_info)); + dev_info(dev, + "uvn int_status:%x queue_err_cnt: 0x%x qid 0x%x\n", + int_status, queue_err_cnt, queue_err_info.queue_id); + + if (int_status & BIT(NBL_UVN_INT_QUEUE_ERR)) { + detail = &abnomal_info->details[NBL_ABNORMAL_EVENT_UVN]; + nbl_hw_process_abnormal_queue(hw_mgt, + queue_err_info.queue_id, + NBL_ABNORMAL_EVENT_UVN, + detail); + + ret |= BIT(NBL_ABNORMAL_EVENT_UVN); + } + + /* clear uvn abnormal irq */ + nbl_hw_wr_regs(hw_mgt, NBL_UVN_INT_STATUS, (u8 *)&int_status, + sizeof(int_status)); + } + + int_status = 0; + nbl_hw_rd_regs(hw_mgt, NBL_DSCH_INT_STATUS, (u8 *)&int_status, + sizeof(u32)); + nbl_hw_rd_regs(hw_mgt, NBL_DSCH_RDMA_OTHER_ABN, (u8 *)&rdma_other_abn, + sizeof(u32)); + if (int_status == U32_MAX) + dev_info(dev, "dsch int_status:0x%x", int_status); + if 
(int_status && int_status != U32_MAX && + (int_status != NBL_DSCH_RDMA_OTHER_ABN_BIT || + rdma_other_abn != NBL_DSCH_RDMA_DPQM_DB_LOST)) { + dev_info(dev, "dsch int_status:%x\n", int_status); + + /* clear dsch abnormal irq */ + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_INT_STATUS, (u8 *)&int_status, + sizeof(int_status)); + } + + int_status = 0; + nbl_hw_rd_regs(hw_mgt, NBL_PCOMPLETER_INT_STATUS, (u8 *)&int_status, + sizeof(u32)); + if (int_status == U32_MAX) + dev_info(dev, "pcomleter int_status:0x%x", int_status); + if (int_status && int_status != U32_MAX) { + nbl_hw_rd_regs(hw_mgt, NBL_PCOMPLETER_TLP_OUT_DROP_CNT, + (u8 *)&tlp_out_drop_cnt, sizeof(u32)); + dev_info(dev, + "pcomleter int_status:0x%x tlp_out_drop_cnt 0x%x\n", + int_status, tlp_out_drop_cnt); + + /* clear pcomleter abnormal irq */ + nbl_hw_wr_regs(hw_mgt, NBL_PCOMPLETER_INT_STATUS, + (u8 *)&int_status, sizeof(int_status)); + } + + return ret; +} + +static void nbl_hw_get_board_info(void *priv, + struct nbl_board_port_info *board_info) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union nbl_fw_board_cfg_dw3 dw3 = { .info = { 0 } }; + + nbl_hw_read_mbx_regs(hw_mgt, NBL_FW_BOARD_DW3_OFFSET, (u8 *)&dw3, + sizeof(dw3)); + board_info->eth_num = dw3.info.port_num; + board_info->eth_speed = dw3.info.port_speed; + board_info->p4_version = dw3.info.p4_version; +} + +static u32 nbl_hw_get_fw_eth_num(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union nbl_fw_board_cfg_dw3 dw3 = { .info = { 0 } }; + + nbl_hw_read_mbx_regs(hw_mgt, NBL_FW_BOARD_DW3_OFFSET, (u8 *)&dw3, + sizeof(dw3)); + return dw3.info.port_num; +} + +static u32 nbl_hw_get_fw_eth_map(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union nbl_fw_board_cfg_dw6 dw6 = { .info = { 0 } }; + + nbl_hw_read_mbx_regs(hw_mgt, NBL_FW_BOARD_DW6_OFFSET, (u8 *)&dw6, + sizeof(dw6)); + return dw6.info.eth_bitmap; +} + static void nbl_hw_set_hw_status(void *priv, enum nbl_hw_status hw_status) { struct 
nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; @@ -253,6 +559,10 @@ static struct nbl_hw_ops hw_ops = { .get_mailbox_rx_tail_ptr = nbl_hw_get_mailbox_rx_tail_ptr, .check_mailbox_dma_err = nbl_hw_check_mailbox_dma_err, .get_host_pf_mask = nbl_hw_get_host_pf_mask, + .get_host_pf_fid = nbl_hw_get_host_pf_fid, + .get_real_bus = nbl_hw_get_real_bus, + .get_pf_bar_addr = nbl_hw_get_pf_bar_addr, + .get_vf_bar_addr = nbl_hw_get_vf_bar_addr, .cfg_mailbox_qinfo = nbl_hw_cfg_mailbox_qinfo, .config_adminq_rxq = nbl_hw_config_adminq_rxq, @@ -263,6 +573,15 @@ static struct nbl_hw_ops hw_ops = { .update_adminq_queue_tail_ptr = nbl_hw_update_adminq_queue_tail_ptr, .check_adminq_dma_err = nbl_hw_check_adminq_dma_err, + .get_hw_addr = nbl_hw_get_hw_addr, + .set_fw_ping = nbl_hw_set_fw_ping, + .get_fw_pong = nbl_hw_get_fw_pong, + .set_fw_pong = nbl_hw_set_fw_pong, + .process_abnormal_event = nbl_hw_process_abnormal_event, + .get_fw_eth_num = nbl_hw_get_fw_eth_num, + .get_fw_eth_map = nbl_hw_get_fw_eth_map, + .get_board_info = nbl_hw_get_board_info, + .get_quirks = nbl_hw_get_quirks, .set_hw_status = nbl_hw_set_hw_status, .get_hw_status = nbl_hw_get_hw_status, diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c new file mode 100644 index 000000000000..ea5c83b1ab76 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c @@ -0,0 +1,998 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ +#include <linux/etherdevice.h> +#include <linux/if_vlan.h> + +#include "nbl_resource_leonis.h" +static int nbl_res_get_queue_num(void *priv, u16 func_id, u16 *tx_queue_num, + u16 *rx_queue_num); + +static void nbl_res_setup_common_ops(struct nbl_resource_mgt *res_mgt) +{ + res_mgt->common_ops.get_queue_num = nbl_res_get_queue_num; +} + +static int nbl_res_pf_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 pf_id) +{ + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + + if (pf_id >= NBL_MAX_PF) + return 0; + + return eth_info->eth_id[pf_id]; +} + +static u32 nbl_res_get_pfvf_queue_num(struct nbl_resource_mgt *res_mgt, + int pfid, int vfid) +{ + struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt); + struct nbl_net_ring_num_info *num_info = &res_info->net_ring_num_info; + u16 func_id = nbl_res_pfvfid_to_func_id(res_mgt, pfid, vfid); + u32 queue_num = 0; + + if (vfid >= 0) { + if (num_info->net_max_qp_num[func_id] != 0) + queue_num = num_info->net_max_qp_num[func_id]; + else + queue_num = num_info->vf_def_max_net_qp_num; + } else { + if (num_info->net_max_qp_num[func_id] != 0) + queue_num = num_info->net_max_qp_num[func_id]; + else + queue_num = num_info->pf_def_max_net_qp_num; + } + + if (queue_num > NBL_MAX_TXRX_QUEUE_PER_FUNC) { + nbl_warn(NBL_RES_MGT_TO_COMMON(res_mgt), + "Invalid queue num %u for func %d, use default", + queue_num, func_id); + queue_num = vfid >= 0 ? 
NBL_DEFAULT_VF_HW_QUEUE_NUM : + NBL_DEFAULT_PF_HW_QUEUE_NUM; + } + + return queue_num; +} + +static void nbl_res_get_rep_queue_info(void *priv, u16 *queue_num, + u16 *queue_size) +{ + *queue_size = NBL_DEFAULT_DESC_NUM; + *queue_num = NBL_DEFAULT_REP_HW_QUEUE_NUM; +} + +static int nbl_res_get_queue_num(void *priv, u16 func_id, u16 *tx_queue_num, + u16 *rx_queue_num) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)(priv); + int pfid, vfid; + + nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid); + + *tx_queue_num = nbl_res_get_pfvf_queue_num(res_mgt, pfid, vfid); + *rx_queue_num = nbl_res_get_pfvf_queue_num(res_mgt, pfid, vfid); + + return 0; +} + +static int +nbl_res_save_vf_bar_info(struct nbl_resource_mgt *res_mgt, u16 func_id, + struct nbl_register_net_param *register_param) +{ + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_sriov_info *sriov_info = + &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id]; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u64 pf_bar_start; + u64 vf_bar_start; + u16 pf_bdf; + u64 vf_bar_size; + u16 total_vfs; + u16 offset; + u16 stride; + + if (func_id < NBL_RES_MGT_TO_PF_NUM(res_mgt)) { + pf_bar_start = hw_ops->get_pf_bar_addr(p, func_id); + sriov_info->pf_bar_start = pf_bar_start; + dev_dbg(dev, "sriov_info, pf_bar_start:%llx\n", + sriov_info->pf_bar_start); + } + + pf_bdf = (u16)sriov_info->bdf; + vf_bar_size = register_param->vf_bar_size; + total_vfs = register_param->total_vfs; + offset = register_param->offset; + stride = register_param->stride; + + if (total_vfs) { + sriov_info->offset = offset; + sriov_info->stride = stride; + vf_bar_start = hw_ops->get_vf_bar_addr(p, func_id); + sriov_info->vf_bar_start = vf_bar_start; + sriov_info->vf_bar_len = vf_bar_size / total_vfs; + + dev_info(dev, + "sriov_info, bdf:%x:%x.%x, num_vfs:%d, start_vf_func_id:%d,", + PCI_BUS_NUM(pf_bdf), PCI_SLOT(pf_bdf & 0xff), + PCI_FUNC(pf_bdf & 0xff), 
sriov_info->num_vfs, + sriov_info->start_vf_func_id); + dev_info(dev, "offset:%d, stride:%d, vf_bar_start: %llx", + offset, stride, sriov_info->vf_bar_start); + } + + return 0; +} + +static int +nbl_res_prepare_vf_chan(struct nbl_resource_mgt *res_mgt, u16 func_id, + struct nbl_register_net_param *register_param) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_sriov_info *sriov_info = + &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id]; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u16 total_vfs; + u16 offset; + u16 stride; + u8 pf_bus; + u8 pf_devfn; + u16 vf_id; + u8 bus; + u8 devfn; + u8 devid; + u8 function; + u16 vf_func_id; + + total_vfs = register_param->total_vfs; + offset = register_param->offset; + stride = register_param->stride; + + if (total_vfs) { + /* Configure mailbox qinfo_map_table for the pf's all vf, + * so vf's mailbox is ready, vf can use mailbox. + */ + pf_bus = PCI_BUS_NUM(sriov_info->bdf); + pf_devfn = sriov_info->bdf & 0xff; + for (vf_id = 0; vf_id < sriov_info->num_vfs; vf_id++) { + vf_func_id = sriov_info->start_vf_func_id + vf_id; + + bus = pf_bus + + ((pf_devfn + offset + stride * vf_id) >> 8); + devfn = (pf_devfn + offset + stride * vf_id) & 0xff; + devid = PCI_SLOT(devfn); + function = PCI_FUNC(devfn); + + hw_ops->cfg_mailbox_qinfo(p, vf_func_id, bus, devid, + function); + } + } + + return 0; +} + +static int nbl_res_update_active_vf_num(struct nbl_resource_mgt *res_mgt, + u16 func_id, bool add_flag) +{ + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_resource_info *resource_info = res_mgt->resource_info; + struct nbl_sriov_info *sriov_info = res_mgt->resource_info->sriov_info; + int pfid = 0; + int vfid = 0; + int ret; + + ret = nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid); + if (ret) { + nbl_err(common, "convert func id to pfvfid failed\n"); + return ret; + } + + if (vfid == U32_MAX) + return 0; + + if (add_flag) { + if (!test_bit(func_id, 
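The bus/devfn arithmetic used when preparing the VF mailboxes is the standard SR-IOV routing-ID computation: the VF's 16-bit routing ID is the PF's routing ID plus First VF Offset plus VF Stride times the VF index, with any overflow of the low byte carrying into the bus number. A self-contained sketch of just that arithmetic (the helper name `vf_bdf()` and the test values are invented):

```c
#include <assert.h>
#include <stdint.h>

/* Derive a VF's bus/devfn from the PF's bus/devfn and the SR-IOV
 * capability's First VF Offset and VF Stride, as done in
 * nbl_res_prepare_vf_chan() and func_id_to_bdf() above.
 */
static void vf_bdf(uint8_t pf_bus, uint8_t pf_devfn,
		   uint16_t offset, uint16_t stride, uint16_t vf_id,
		   uint8_t *bus, uint8_t *devfn)
{
	uint16_t rid = pf_devfn + offset + stride * vf_id;

	*bus = pf_bus + (rid >> 8);	/* carry into the bus number */
	*devfn = rid & 0xff;
}
```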
resource_info->func_bitmap)) { + sriov_info[pfid].active_vf_num++; + set_bit(func_id, resource_info->func_bitmap); + } + } else if (sriov_info[pfid].active_vf_num) { + if (test_bit(func_id, resource_info->func_bitmap)) { + sriov_info[pfid].active_vf_num--; + clear_bit(func_id, resource_info->func_bitmap); + } + } + + return 0; +} + +static u32 nbl_res_get_quirks(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + return hw_ops->get_quirks(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); +} + +static int nbl_res_register_net(void *priv, u16 func_id, + struct nbl_register_net_param *register_param, + struct nbl_register_net_result *register_result) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt); + netdev_features_t csumo_features = 0; + netdev_features_t tso_features = 0; + netdev_features_t pf_features = 0; + netdev_features_t vlano_features = 0; + u16 tx_queue_num, rx_queue_num; + u8 mac[ETH_ALEN] = {0}; + u32 quirks; + int ret = 0; + + if (func_id < NBL_MAX_PF) { + nbl_res_get_eth_mac(res_mgt, mac, + nbl_res_pf_to_eth_id(res_mgt, func_id)); + pf_features = NBL_FEATURE(NETIF_F_NTUPLE); + register_result->trusted = 1; + } else { + ether_addr_copy(mac, vsi_info->mac_info[func_id].mac); + register_result->trusted = vsi_info->mac_info[func_id].trusted; + } + ether_addr_copy(register_result->mac, mac); + + quirks = nbl_res_get_quirks(res_mgt); + if (!(quirks & BIT(NBL_QUIRKS_NO_TOE))) { + csumo_features = NBL_FEATURE(NETIF_F_RXCSUM) | + NBL_FEATURE(NETIF_F_IP_CSUM) | + NBL_FEATURE(NETIF_F_IPV6_CSUM); + tso_features = NBL_FEATURE(NETIF_F_TSO) | + NBL_FEATURE(NETIF_F_TSO6) | + NBL_FEATURE(NETIF_F_GSO_UDP_L4); + } + + if (func_id < NBL_MAX_PF) /* vf unsupport */ + vlano_features = NBL_FEATURE(NETIF_F_HW_VLAN_CTAG_TX) | + NBL_FEATURE(NETIF_F_HW_VLAN_CTAG_RX) | + 
NBL_FEATURE(NETIF_F_HW_VLAN_STAG_TX) | + NBL_FEATURE(NETIF_F_HW_VLAN_STAG_RX); + + register_result->hw_features |= + pf_features | csumo_features | tso_features | vlano_features | + NBL_FEATURE(NETIF_F_SG) | NBL_FEATURE(NETIF_F_RXHASH); + + register_result->features |= register_result->hw_features | + NBL_FEATURE(NETIF_F_HW_VLAN_CTAG_FILTER) | + NBL_FEATURE(NETIF_F_HW_VLAN_STAG_FILTER); + + register_result->vlan_features = register_result->features; + + register_result->max_mtu = NBL_MAX_JUMBO_FRAME_SIZE - NBL_PKT_HDR_PAD; + + register_result->vlan_proto = vsi_info->mac_info[func_id].vlan_proto; + register_result->vlan_tci = vsi_info->mac_info[func_id].vlan_tci; + register_result->rate = vsi_info->mac_info[func_id].rate; + + nbl_res_get_queue_num(res_mgt, func_id, &tx_queue_num, &rx_queue_num); + register_result->tx_queue_num = tx_queue_num; + register_result->rx_queue_num = rx_queue_num; + register_result->queue_size = NBL_DEFAULT_DESC_NUM; + + ret = nbl_res_update_active_vf_num(res_mgt, func_id, 1); + if (ret) { + nbl_err(common, "change active vf num failed with ret: %d\n", + ret); + goto update_active_vf_fail; + } + + if (func_id >= NBL_RES_MGT_TO_PF_NUM(res_mgt)) + return 0; + + ret = nbl_res_save_vf_bar_info(res_mgt, func_id, register_param); + if (ret) + goto save_vf_bar_info_fail; + + ret = nbl_res_prepare_vf_chan(res_mgt, func_id, register_param); + if (ret) + goto prepare_vf_chan_fail; + + nbl_res_open_sfp(res_mgt, nbl_res_pf_to_eth_id(res_mgt, func_id)); + + return ret; + +prepare_vf_chan_fail: +save_vf_bar_info_fail: +update_active_vf_fail: + return ret; +} + +static int nbl_res_unregister_net(void *priv, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + + return nbl_res_update_active_vf_num(res_mgt, func_id, 0); +} + +static u16 nbl_res_get_vsi_id(void *priv, u16 func_id, u16 type) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + + return nbl_res_func_id_to_vsi_id(res_mgt, func_id, type); 
+}
+
+static void nbl_res_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
+			       u8 *logic_eth_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+	u16 pf_id = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id);
+
+	*eth_mode = eth_info->eth_num;
+	if (pf_id < eth_info->eth_num) {
+		*eth_id = eth_info->eth_id[pf_id];
+		*logic_eth_id = pf_id;
+	/* if pf_id >= eth_num, use eth_id 0 */
+	} else {
+		*eth_id = eth_info->eth_id[0];
+		*logic_eth_id = 0;
+	}
+}
+
+static u8 __iomem *nbl_res_get_hw_addr(void *priv, size_t *size)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+	return hw_ops->get_hw_addr(NBL_RES_MGT_TO_HW_PRIV(res_mgt), size);
+}
+
+static u16 nbl_res_get_function_id(void *priv, u16 vsi_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+
+	return nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+}
+
+static void nbl_res_get_real_bdf(void *priv, u16 vsi_id, u8 *bus, u8 *dev,
+				 u8 *function)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+
+	nbl_res_func_id_to_bdf(res_mgt, func_id, bus, dev, function);
+}
+
+static int
+nbl_res_process_abnormal_event(void *priv,
+			       struct nbl_abnormal_event_info *abnormal_info)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+	return hw_ops->process_abnormal_event(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+					      abnormal_info);
+}
+
+static void nbl_res_flr_clear_net(void *priv, u16 vf_id)
+{
+	u16 func_id = vf_id + NBL_MAX_PF;
+
+	if (nbl_res_vf_is_active(priv, func_id))
+		nbl_res_unregister_net(priv, func_id);
+}
+
+static u16 nbl_res_covert_vfid_to_vsi_id(void *priv, u16 vf_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	u16 func_id =
vf_id + NBL_MAX_PF; + + return nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_VF_DATA_TYPE); +} + +static bool nbl_res_check_vf_is_active(void *priv, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + + return nbl_res_vf_is_active(res_mgt, func_id); +} + +static int +nbl_res_get_ustore_total_pkt_drop_stats(void *priv, u8 eth_id, + struct nbl_ustore_stats *stat) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_ustore_stats *ustore_stats = + NBL_RES_MGT_TO_USTORE_STATS(res_mgt); + + stat->rx_drop_packets = + ustore_stats[eth_id].rx_drop_packets; + stat->rx_trun_packets = + ustore_stats[eth_id].rx_trun_packets; + return 0; +} + +static int nbl_res_get_board_id(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + + return NBL_COMMON_TO_BOARD_ID(common); +} + +static void nbl_res_register_dev_name(void *priv, u16 vsi_id, char *name) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_resource_info *resource_info = + NBL_RES_MGT_TO_RES_INFO(res_mgt); + u32 pf_id; + + pf_id = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id); + WARN_ON(pf_id >= NBL_MAX_PF); + strscpy(resource_info->pf_name_list[pf_id], name, IFNAMSIZ); + nbl_info(NBL_RES_MGT_TO_COMMON(res_mgt), + "vsi:%u-pf:%u register a pf_name->%s", vsi_id, pf_id, name); +} + +static void nbl_res_get_dev_name(void *priv, u16 vsi_id, char *name) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_resource_info *resource_info = + NBL_RES_MGT_TO_RES_INFO(res_mgt); + int pf_id, vf_id; + u16 func_id; + int name_len; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pf_id, &vf_id); + WARN_ON(pf_id >= NBL_MAX_PF); + name_len = snprintf(name, IFNAMSIZ, "%sv%d", + resource_info->pf_name_list[pf_id], vf_id); + if (name_len >= 
IFNAMSIZ) + nbl_err(NBL_RES_MGT_TO_COMMON(res_mgt), + "vsi:%u-pf%uvf%u get name over length", vsi_id, pf_id, + vf_id); + + nbl_debug(NBL_RES_MGT_TO_COMMON(res_mgt), + "vsi:%u-pf%uvf%u get a pf_name->%s", vsi_id, pf_id, vf_id, + name); +} + +static struct nbl_resource_ops res_ops = { + .register_net = nbl_res_register_net, + .unregister_net = nbl_res_unregister_net, + .get_vsi_id = nbl_res_get_vsi_id, + .get_eth_id = nbl_res_get_eth_id, + + .get_rep_queue_info = nbl_res_get_rep_queue_info, + + .get_hw_addr = nbl_res_get_hw_addr, + .get_function_id = nbl_res_get_function_id, + .get_real_bdf = nbl_res_get_real_bdf, + .get_product_fix_cap = nbl_res_get_fix_capability, + + .flr_clear_net = nbl_res_flr_clear_net, + .covert_vfid_to_vsi_id = nbl_res_covert_vfid_to_vsi_id, + .check_vf_is_active = nbl_res_check_vf_is_active, + + .get_ustore_total_pkt_drop_stats = + nbl_res_get_ustore_total_pkt_drop_stats, + + .process_abnormal_event = nbl_res_process_abnormal_event, + + .get_board_id = nbl_res_get_board_id, + + .set_hw_status = nbl_res_set_hw_status, + .register_dev_name = nbl_res_register_dev_name, + .get_dev_name = nbl_res_get_dev_name, +}; + +static struct nbl_res_product_ops product_ops = { +}; + +static int +nbl_res_setup_res_mgt(struct nbl_common_info *common, + struct nbl_resource_mgt_leonis **res_mgt_leonis) +{ + struct device *dev; + struct nbl_resource_info *resource_info; + + dev = NBL_COMMON_TO_DEV(common); + *res_mgt_leonis = devm_kzalloc(dev, + sizeof(struct nbl_resource_mgt_leonis), + GFP_KERNEL); + if (!*res_mgt_leonis) + return -ENOMEM; + NBL_RES_MGT_TO_COMMON(&(*res_mgt_leonis)->res_mgt) = common; + + resource_info = + devm_kzalloc(dev, sizeof(struct nbl_resource_info), GFP_KERNEL); + if (!resource_info) + return -ENOMEM; + NBL_RES_MGT_TO_RES_INFO(&(*res_mgt_leonis)->res_mgt) = resource_info; + + return 0; +} + +static void +nbl_res_remove_res_mgt(struct nbl_common_info *common, + struct nbl_resource_mgt_leonis **res_mgt_leonis) +{ + struct device *dev; + + 
dev = NBL_COMMON_TO_DEV(common);
+	devm_kfree(dev, NBL_RES_MGT_TO_RES_INFO(&(*res_mgt_leonis)->res_mgt));
+	devm_kfree(dev, *res_mgt_leonis);
+	*res_mgt_leonis = NULL;
+}
+
+static void nbl_res_remove_ops(struct device *dev,
+			       struct nbl_resource_ops_tbl **res_ops_tbl)
+{
+	devm_kfree(dev, *res_ops_tbl);
+	*res_ops_tbl = NULL;
+}
+
+static int nbl_res_setup_ops(struct device *dev,
+			     struct nbl_resource_ops_tbl **res_ops_tbl,
+			     struct nbl_resource_mgt_leonis *res_mgt_leonis)
+{
+	*res_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_resource_ops_tbl),
+				    GFP_KERNEL);
+	if (!*res_ops_tbl)
+		return -ENOMEM;
+
+	(*res_ops_tbl)->ops = &res_ops;
+	(*res_ops_tbl)->priv = res_mgt_leonis;
+
+	return 0;
+}
+
+static int nbl_res_ctrl_dev_setup_eth_info(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	struct nbl_eth_info *eth_info;
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	u32 eth_num = 0;
+	u32 eth_bitmap, eth_id;
+	int i;
+
+	eth_info = devm_kzalloc(dev, sizeof(struct nbl_eth_info), GFP_KERNEL);
+	if (!eth_info)
+		return -ENOMEM;
+
+	res_mgt->resource_info->eth_info = eth_info;
+
+	eth_info->eth_num =
+		(u8)hw_ops->get_fw_eth_num(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+	eth_bitmap = hw_ops->get_fw_eth_map(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+	/* on a 2-port board, the eth_ids are 0 and 2 */
+	for (i = 0; i < NBL_MAX_ETHERNET; i++) {
+		if ((1 << i) & eth_bitmap) {
+			set_bit(i, eth_info->eth_bitmap);
+			eth_info->eth_id[eth_num] = i;
+			eth_info->logic_eth_id[i] = eth_num;
+			eth_num++;
+		}
+	}
+
+	for (i = 0; i < NBL_RES_MGT_TO_PF_NUM(res_mgt); i++) {
+		/* if pf_id < eth_num, the pf maps to its own eth_id */
+		if (i < eth_num) {
+			eth_id = eth_info->eth_id[i];
+			eth_info->pf_bitmap[eth_id] |= BIT(i);
+		}
+		/* if pf_id >= eth_num, the pf maps to eth 0 */
+		else
+			eth_info->pf_bitmap[0] |= BIT(i);
+	}
+
+	return 0;
+}
+
+static void nbl_res_ctrl_dev_remove_eth_info(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev =
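`nbl_res_ctrl_dev_setup_eth_info()` builds two lookup tables from the firmware's port bitmap: a dense logical-id-to-physical-id table and its inverse. A standalone sketch of that mapping, with invented names and a small `MAX_ETH` (on a 2-port board with bitmap 0b0101, physical ports {0, 2} become logical ports {0, 1}, matching the comment in the driver):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ETH 4	/* stands in for NBL_MAX_ETHERNET */

/* Walk the set bits of the port bitmap, handing out consecutive
 * logical ids; returns the number of active ports.
 */
static int build_eth_map(uint32_t bitmap, uint8_t eth_id[MAX_ETH],
			 uint8_t logic_eth_id[MAX_ETH])
{
	int i, num = 0;

	for (i = 0; i < MAX_ETH; i++) {
		if (bitmap & (1U << i)) {
			eth_id[num] = i;	/* logical -> physical */
			logic_eth_id[i] = num;	/* physical -> logical */
			num++;
		}
	}
	return num;
}
```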
NBL_RES_MGT_TO_DEV(res_mgt); + struct nbl_eth_info **eth_info = &NBL_RES_MGT_TO_ETH_INFO(res_mgt); + + if (*eth_info) { + devm_kfree(dev, *eth_info); + *eth_info = NULL; + } +} + +static int nbl_res_ctrl_dev_sriov_info_init(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt); + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(common); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_sriov_info *sriov_info; + u32 vf_fid, vf_startid, vf_endid = NBL_MAX_VF; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u16 func_id; + u16 function; + + sriov_info = devm_kcalloc(dev, NBL_RES_MGT_TO_PF_NUM(res_mgt), + sizeof(struct nbl_sriov_info), GFP_KERNEL); + if (!sriov_info) + return -ENOMEM; + + res_mgt->resource_info->sriov_info = sriov_info; + + for (func_id = 0; func_id < NBL_RES_MGT_TO_PF_NUM(res_mgt); func_id++) { + sriov_info = &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id]; + function = NBL_COMMON_TO_PCI_FUNC_ID(common) + func_id; + + common->hw_bus = (u8)hw_ops->get_real_bus(p); + sriov_info->bdf = PCI_DEVID(common->hw_bus, + PCI_DEVFN(common->devid, function)); + vf_fid = hw_ops->get_host_pf_fid(p, func_id); + vf_startid = vf_fid & 0xFFFF; + vf_endid = (vf_fid >> 16) & 0xFFFF; + sriov_info->start_vf_func_id = vf_startid + NBL_MAX_PF_LEONIS; + sriov_info->num_vfs = vf_endid - vf_startid; + } + + res_info->max_vf_num = vf_endid; + + return 0; +} + +static void nbl_res_ctrl_dev_sriov_info_remove(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_sriov_info **sriov_info = + &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt); + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + + if (!(*sriov_info)) + return; + + devm_kfree(dev, *sriov_info); + *sriov_info = NULL; +} + +static int nbl_res_ctrl_dev_vsi_info_init(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct device *dev = 
NBL_COMMON_TO_DEV(common); + struct nbl_vsi_info *vsi_info; + struct nbl_sriov_info *sriov_info; + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + int i; + + vsi_info = devm_kcalloc(dev, NBL_RES_MGT_TO_PF_NUM(res_mgt), + sizeof(struct nbl_vsi_info), GFP_KERNEL); + if (!vsi_info) + return -ENOMEM; + + res_mgt->resource_info->vsi_info = vsi_info; + /* + * case 1 two port(2pf) + * pf0,pf1(NBL_VSI_SERV_PF_DATA_TYPE) vsi is 0,512 + * pf0,pf1(NBL_VSI_SERV_PF_CTLR_TYPE) vsi is 1,513 + * pf0,pf1(NBL_VSI_SERV_PF_USER_TYPE) vsi is 2,514 + * pf0,pf1(NBL_VSI_SERV_PF_XDP_TYPE) vsi is 3,515 + * pf0.vf0-pf0.vf255(NBL_VSI_SERV_VF_DATA_TYPE) vsi is 4-259 + * pf1.vf0-pf1.vf255(NBL_VSI_SERV_VF_DATA_TYPE) vsi is 516-771 + * pf2-pf7(NBL_VSI_SERV_PF_EXTRA_TYPE) vsi 260-265(if exist) + * case 2 four port(4pf) + * pf0,pf1,pf2,pf3(NBL_VSI_SERV_PF_DATA_TYPE) vsi is 0,256,512,768 + * pf0,pf1,pf2,pf3(NBL_VSI_SERV_PF_CTLR_TYPE) vsi is 1,257,513,769 + * pf0,pf1,pf2,pf3(NBL_VSI_SERV_PF_USER_TYPE) vsi is 2,258,514,770 + * pf0,pf1,pf2,pf3(NBL_VSI_SERV_PF_XDP_TYPE) vsi is 3,259,515,771 + * pf0.vf0-pf0.vf127(NBL_VSI_SERV_VF_DATA_TYPE) vsi is 4-131 + * pf1.vf0-pf1.vf127(NBL_VSI_SERV_VF_DATA_TYPE) vsi is 260-387 + * pf2.vf0-pf2.vf127(NBL_VSI_SERV_VF_DATA_TYPE) vsi is 516-643 + * pf3.vf0-pf3.vf127(NBL_VSI_SERV_VF_DATA_TYPE) vsi is 772-899 + * pf4-pf7(NBL_VSI_SERV_PF_EXTRA_TYPE) vsi 132-135(if exist) + */ + + vsi_info->num = eth_info->eth_num; + for (i = 0; i < vsi_info->num; i++) { + vsi_info->serv_info[i][NBL_VSI_SERV_PF_DATA_TYPE].base_id = + i * NBL_VSI_ID_GAP(vsi_info->num); + vsi_info->serv_info[i][NBL_VSI_SERV_PF_DATA_TYPE].num = 1; + vsi_info->serv_info[i][NBL_VSI_SERV_PF_CTLR_TYPE].base_id = + vsi_info->serv_info[i][NBL_VSI_SERV_PF_DATA_TYPE] + .base_id + + vsi_info->serv_info[i][NBL_VSI_SERV_PF_DATA_TYPE].num; + vsi_info->serv_info[i][NBL_VSI_SERV_PF_CTLR_TYPE].num = 1; + vsi_info->serv_info[i][NBL_VSI_SERV_PF_USER_TYPE].base_id = + 
vsi_info->serv_info[i][NBL_VSI_SERV_PF_CTLR_TYPE]
+				.base_id +
+			vsi_info->serv_info[i][NBL_VSI_SERV_PF_CTLR_TYPE].num;
+		vsi_info->serv_info[i][NBL_VSI_SERV_PF_USER_TYPE].num = 1;
+		vsi_info->serv_info[i][NBL_VSI_SERV_PF_XDP_TYPE].base_id =
+			vsi_info->serv_info[i][NBL_VSI_SERV_PF_USER_TYPE]
+				.base_id +
+			vsi_info->serv_info[i][NBL_VSI_SERV_PF_USER_TYPE].num;
+		vsi_info->serv_info[i][NBL_VSI_SERV_PF_XDP_TYPE].num = 1;
+		vsi_info->serv_info[i][NBL_VSI_SERV_VF_DATA_TYPE].base_id =
+			vsi_info->serv_info[i][NBL_VSI_SERV_PF_XDP_TYPE]
+				.base_id +
+			vsi_info->serv_info[i][NBL_VSI_SERV_PF_XDP_TYPE].num;
+		sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + i;
+		vsi_info->serv_info[i][NBL_VSI_SERV_VF_DATA_TYPE].num =
+			sriov_info->num_vfs;
+	}
+
+	/* PFs with pf_id >= eth_num belong to pf0's switch */
+	vsi_info->serv_info[0][NBL_VSI_SERV_PF_EXTRA_TYPE].base_id =
+		vsi_info->serv_info[0][NBL_VSI_SERV_VF_DATA_TYPE].base_id +
+		vsi_info->serv_info[0][NBL_VSI_SERV_VF_DATA_TYPE].num;
+	vsi_info->serv_info[0][NBL_VSI_SERV_PF_EXTRA_TYPE].num =
+		NBL_RES_MGT_TO_PF_NUM(res_mgt) - vsi_info->num;
+
+	return 0;
+}
+
+static void nbl_res_ctrl_dev_remove_vsi_info(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	struct nbl_vsi_info **vsi_info = &NBL_RES_MGT_TO_VSI_INFO(res_mgt);
+
+	if (!(*vsi_info))
+		return;
+
+	devm_kfree(dev, *vsi_info);
+	*vsi_info = NULL;
+}
+
+static int nbl_res_ring_num_info_init(struct nbl_resource_mgt *res_mgt)
+{
+	struct nbl_resource_info *resource_info =
+		NBL_RES_MGT_TO_RES_INFO(res_mgt);
+	struct nbl_net_ring_num_info *num_info =
+		&resource_info->net_ring_num_info;
+
+	num_info->pf_def_max_net_qp_num = NBL_DEFAULT_PF_HW_QUEUE_NUM;
+	num_info->vf_def_max_net_qp_num = NBL_DEFAULT_VF_HW_QUEUE_NUM;
+
+	return 0;
+}
+
+static int nbl_res_ctrl_dev_ustore_stats_init(struct nbl_resource_mgt *res_mgt)
+{
+	struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+	struct
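The VSI numbering laid out in the comment above `nbl_res_ctrl_dev_vsi_info_init()` can be reproduced with simple arithmetic: each port owns a fixed window (assuming here that `NBL_VSI_ID_GAP()` is 1024 divided by the port count, which is consistent with the 512/256 spacings in the comment), with the four PF service VSIs first and the VF data VSIs following. The function name and enum below are illustrative, not driver symbols:

```c
#include <assert.h>

/* Service slots within a port's window, in allocation order. */
enum { SERV_DATA, SERV_CTRL, SERV_USER, SERV_XDP, SERV_VF };

/* Compute a VSI id per the documented layout; `vfid` is ignored for
 * non-VF types. Assumes a 1024-wide id space split evenly per port.
 */
static int vsi_id(int port_num, int pf, int type, int vfid)
{
	int gap = 1024 / port_num;	/* assumed per-port window */
	int base = pf * gap;

	if (type == SERV_VF)
		return base + 4 + vfid;	/* VFs follow the 4 PF services */
	return base + type;
}
```

The assertions below correspond one-to-one to the example ids in the driver's comment (e.g. pf1's data VSI is 512 on a 2-port board, pf1.vf0 is 260 on a 4-port board).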
nbl_ustore_stats *ustore_stats; + + ustore_stats = devm_kcalloc(dev, NBL_MAX_ETHERNET, + sizeof(struct nbl_ustore_stats), + GFP_KERNEL); + if (!ustore_stats) + return -ENOMEM; + + res_mgt->resource_info->ustore_stats = ustore_stats; + + return 0; +} + +static void +nbl_res_ctrl_dev_ustore_stats_remove(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_ustore_stats **ustore_stats = + &NBL_RES_MGT_TO_USTORE_STATS(res_mgt); + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + + if (!(*ustore_stats)) + return; + + devm_kfree(dev, *ustore_stats); + *ustore_stats = NULL; +} + +static int nbl_res_check_fw_working(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + unsigned long fw_pong_current; + unsigned long seconds_current = 0; + unsigned long timeout_us = 500 * USEC_PER_MSEC; + unsigned long sleep_us = USEC_PER_MSEC; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + ktime_t timeout = ktime_add_us(ktime_get(), timeout_us); + bool sleep_before_read = false; + + seconds_current = (unsigned long)ktime_get_real_seconds(); + hw_ops->set_fw_pong(p, seconds_current - 1); + hw_ops->set_fw_ping(p, seconds_current); + + might_sleep_if(sleep_us != 0); + if (sleep_before_read && sleep_us) + usleep_range((sleep_us >> 2) + 1, sleep_us); + for (;;) { + fw_pong_current = + hw_ops->get_fw_pong(p); + if (fw_pong_current == seconds_current) + break; + if (timeout_us && ktime_compare(ktime_get(), timeout) > 0) { + fw_pong_current = hw_ops->get_fw_pong(p); + break; + } + if (sleep_us) + usleep_range((sleep_us >> 2) + 1, sleep_us); + } + return (fw_pong_current == seconds_current) ? 
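`nbl_res_check_fw_working()` above is a ping/pong handshake: the driver writes a token to the ping register and polls the pong register until the firmware echoes it or a deadline passes, in the shape of the kernel's `readx_poll_timeout()` helpers. A deterministic sketch of the loop with simulated registers (all names and the poll-count "clock" are invented for illustration):

```c
#include <assert.h>

static int polls_until_echo;		/* simulated firmware latency */
static unsigned long token_written;	/* simulated ping register */

static void write_ping(unsigned long t) { token_written = t; }

/* Simulated pong register: returns a stale value for the first few
 * polls, then echoes the token, mimicking firmware catching up.
 */
static unsigned long read_pong(void)
{
	if (polls_until_echo > 0) {
		polls_until_echo--;
		return token_written - 1;	/* stale value so far */
	}
	return token_written;			/* firmware echoed */
}

/* Poll until the token is echoed or the budget runs out. */
static int check_fw(unsigned long token, int budget)
{
	write_ping(token);
	while (budget-- > 0)
		if (read_pong() == token)
			return 0;
	return -1;	/* stands in for -ETIMEDOUT */
}
```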
0 : -ETIMEDOUT; +} + +static int nbl_res_init_pf_num(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u32 pf_mask; + u32 pf_num = 0; + int i; + + pf_mask = hw_ops->get_host_pf_mask(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + for (i = 0; i < NBL_MAX_PF_LEONIS; i++) { + if (!(pf_mask & (1 << i))) + pf_num++; + else + break; + } + + res_mgt->resource_info->max_pf = pf_num; + + if (!pf_num) + return -1; + + return 0; +} + +static void nbl_res_init_board_info(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + hw_ops->get_board_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + &res_mgt->resource_info->board_info); +} + +static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis) +{ + struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt; + + nbl_res_ctrl_dev_ustore_stats_remove(res_mgt); + nbl_res_ctrl_dev_remove_vsi_info(res_mgt); + nbl_res_ctrl_dev_remove_eth_info(res_mgt); + nbl_res_ctrl_dev_sriov_info_remove(res_mgt); +} + +static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis, + struct nbl_func_caps caps) +{ + struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + + u32 quirks; + int ret = 0; + + if (caps.has_ctrl) { + ret = nbl_res_check_fw_working(res_mgt); + if (ret) { + nbl_err(common, "fw is not working"); + return ret; + } + + nbl_res_init_board_info(res_mgt); + + ret = nbl_res_init_pf_num(res_mgt); + if (ret) { + nbl_err(common, "pf number is illegal"); + return ret; + } + + ret = nbl_res_ctrl_dev_sriov_info_init(res_mgt); + if (ret) { + nbl_err(common, "Failed to init sr_iov info"); + return ret; + } + + ret = nbl_res_ctrl_dev_setup_eth_info(res_mgt); + if (ret) + goto start_fail; + + ret = nbl_res_ctrl_dev_vsi_info_init(res_mgt); + if (ret) + goto start_fail; + + ret = nbl_res_ring_num_info_init(res_mgt); + if (ret) + goto start_fail; + + ret = 
nbl_res_ctrl_dev_ustore_stats_init(res_mgt); + if (ret) + goto start_fail; + + nbl_res_set_fix_capability(res_mgt, NBL_TASK_FW_HB_CAP); + nbl_res_set_fix_capability(res_mgt, NBL_TASK_FW_RESET_CAP); + nbl_res_set_fix_capability(res_mgt, NBL_TASK_CLEAN_ADMINDQ_CAP); + nbl_res_set_fix_capability(res_mgt, NBL_RESTOOL_CAP); + nbl_res_set_fix_capability(res_mgt, NBL_TASK_ADAPT_DESC_GOTHER); + nbl_res_set_fix_capability(res_mgt, NBL_PROCESS_FLR_CAP); + nbl_res_set_fix_capability(res_mgt, NBL_TASK_RESET_CTRL_CAP); + nbl_res_set_fix_capability(res_mgt, NBL_NEED_DESTROY_CHIP); + } + + nbl_res_set_fix_capability(res_mgt, NBL_TASK_CLEAN_MAILBOX_CAP); + + nbl_res_set_fix_capability(res_mgt, NBL_TASK_RESET_CAP); + + quirks = nbl_res_get_quirks(res_mgt); + if (quirks & BIT(NBL_QUIRKS_NO_TOE)) { + nbl_res_set_fix_capability(res_mgt, NBL_TASK_KEEP_ALIVE); + if (caps.has_ctrl) + nbl_res_set_fix_capability(res_mgt, + NBL_RECOVERY_ABN_STATUS); + } + + return 0; + +start_fail: + nbl_res_stop(res_mgt_leonis); + return ret; +} + +int nbl_res_init_leonis(void *p, struct nbl_init_param *param) +{ + struct nbl_adapter *adap = (struct nbl_adapter *)p; + struct device *dev; + struct nbl_common_info *common; + struct nbl_resource_mgt_leonis **mgt; + struct nbl_resource_ops_tbl **res_ops_tbl; + struct nbl_hw_ops_tbl *hw_ops_tbl; + struct nbl_channel_ops_tbl *chan_ops_tbl; + int ret = 0; + + dev = NBL_ADAP_TO_DEV(adap); + common = NBL_ADAP_TO_COMMON(adap); + mgt = + (struct nbl_resource_mgt_leonis **)&NBL_ADAP_TO_RES_MGT(adap); + res_ops_tbl = &NBL_ADAP_TO_RES_OPS_TBL(adap); + hw_ops_tbl = NBL_ADAP_TO_HW_OPS_TBL(adap); + chan_ops_tbl = NBL_ADAP_TO_CHAN_OPS_TBL(adap); + + ret = nbl_res_setup_res_mgt(common, mgt); + if (ret) + goto setup_mgt_fail; + + nbl_res_setup_common_ops(&(*mgt)->res_mgt); + (&(*mgt)->res_mgt)->chan_ops_tbl = chan_ops_tbl; + (&(*mgt)->res_mgt)->hw_ops_tbl = hw_ops_tbl; + + (&(*mgt)->res_mgt)->product_ops = &product_ops; + + ret = nbl_res_start(*mgt, param->caps); + if (ret) + 
goto start_fail;
+
+	ret = nbl_res_setup_ops(dev, res_ops_tbl, *mgt);
+	if (ret)
+		goto setup_ops_fail;
+
+	return 0;
+
+setup_ops_fail:
+	nbl_res_stop(*mgt);
+start_fail:
+	nbl_res_remove_res_mgt(common, mgt);
+setup_mgt_fail:
+	return ret;
+}
+
+void nbl_res_remove_leonis(void *p)
+{
+	struct nbl_adapter *adap = (struct nbl_adapter *)p;
+	struct device *dev;
+	struct nbl_common_info *common;
+	struct nbl_resource_mgt_leonis **mgt;
+	struct nbl_resource_ops_tbl **res_ops_tbl;
+
+	dev = NBL_ADAP_TO_DEV(adap);
+	common = NBL_ADAP_TO_COMMON(adap);
+	mgt = (struct nbl_resource_mgt_leonis **)&NBL_ADAP_TO_RES_MGT(adap);
+	res_ops_tbl = &NBL_ADAP_TO_RES_OPS_TBL(adap);
+
+	nbl_res_remove_ops(dev, res_ops_tbl);
+	nbl_res_stop(*mgt);
+	nbl_res_remove_res_mgt(common, mgt);
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
new file mode 100644
index 000000000000..a0a25a2b71ee
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_RESOURCE_LEONIS_H_
+#define _NBL_RESOURCE_LEONIS_H_
+
+#include "nbl_resource.h"
+
+#define NBL_MAX_PF_LEONIS	8
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
new file mode 100644
index 000000000000..22205e055100
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c
@@ -0,0 +1,427 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author: + */ + +#include "nbl_resource.h" + +static u16 pfvfid_to_vsi_id(void *p, int pfid, int vfid, u16 type) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt); + enum nbl_vsi_serv_type dst_type = NBL_VSI_SERV_PF_DATA_TYPE; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + u16 vsi_id = U16_MAX; + int diff; + + diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid); + if (vfid == U32_MAX || vfid == U16_MAX) { + if (diff < vsi_info->num) { + nbl_res_pf_dev_vsi_type_to_hw_vsi_type(type, &dst_type); + vsi_id = vsi_info->serv_info[diff][dst_type].base_id; + } else { + vsi_id = vsi_info->serv_info[0] + [NBL_VSI_SERV_PF_EXTRA_TYPE] + .base_id + + (diff - vsi_info->num); + } + } else { + vsi_id = vsi_info->serv_info[diff][NBL_VSI_SERV_VF_DATA_TYPE] + .base_id + + vfid; + } + + if (vsi_id == U16_MAX) + pr_err("convert pfid-vfid %d-%d to vsi_id(%d) failed!\n", pfid, + vfid, type); + + return vsi_id; +} + +static u16 func_id_to_vsi_id(void *p, u16 func_id, u16 type) +{ + int pfid = U32_MAX; + int vfid = U32_MAX; + + nbl_res_func_id_to_pfvfid(p, func_id, &pfid, &vfid); + return nbl_res_pfvfid_to_vsi_id(p, pfid, vfid, type); +} + +static u16 vsi_id_to_func_id(void *p, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt); + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_sriov_info *sriov_info; + int i, j; + u16 func_id = U16_MAX; + bool vsi_find = false; + + for (i = 0; i < vsi_info->num; i++) { + for (j = 0; j < NBL_VSI_SERV_MAX_TYPE; j++) { + if (vsi_id >= vsi_info->serv_info[i][j].base_id && + (vsi_id < vsi_info->serv_info[i][j].base_id + + vsi_info->serv_info[i][j].num)) { + vsi_find = true; + break; + } + } + + if (vsi_find) + break; + } + + if (vsi_find) { + /* if pf_id < eth_num */ + if (j >= NBL_VSI_SERV_PF_DATA_TYPE && + j <= 
NBL_VSI_SERV_PF_USER_TYPE) + func_id = i + NBL_COMMON_TO_MGT_PF(common); + /* if vf */ + else if (j == NBL_VSI_SERV_VF_DATA_TYPE) { + sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + i; + func_id = + sriov_info->start_vf_func_id + + (vsi_id - + vsi_info->serv_info[i] + [NBL_VSI_SERV_VF_DATA_TYPE] + .base_id); + /* if extra pf */ + } else { + func_id = + vsi_info->num + + (vsi_id - + vsi_info->serv_info[i] + [NBL_VSI_SERV_PF_EXTRA_TYPE] + .base_id); + } + } + + if (func_id == U16_MAX) + pr_err("convert vsi_id %d to func_id failed!\n", vsi_id); + + return func_id; +} + +static int vsi_id_to_pf_id(void *p, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_vsi_info *vsi_info = NBL_RES_MGT_TO_VSI_INFO(res_mgt); + u32 pf_id = U32_MAX; + bool vsi_find = false; + int i, j; + + for (i = 0; i < vsi_info->num; i++) { + for (j = 0; j < NBL_VSI_SERV_MAX_TYPE; j++) + if (vsi_id >= vsi_info->serv_info[i][j].base_id && + (vsi_id < vsi_info->serv_info[i][j].base_id + + vsi_info->serv_info[i][j].num)) { + vsi_find = true; + break; + } + + if (vsi_find) + break; + } + + if (vsi_find) { + /* if pf_id < eth_num */ + if (j >= NBL_VSI_SERV_PF_DATA_TYPE && + j <= NBL_VSI_SERV_VF_DATA_TYPE) + pf_id = i + NBL_COMMON_TO_MGT_PF(common); + /* if extra pf */ + else if (j == NBL_VSI_SERV_PF_EXTRA_TYPE) + pf_id = vsi_info->num + + (vsi_id - + vsi_info->serv_info[i] + [NBL_VSI_SERV_PF_EXTRA_TYPE] + .base_id); + } + + return pf_id; +} + +static int func_id_to_pfvfid(void *p, u16 func_id, int *pfid, int *vfid) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_sriov_info *sriov_info; + int pf_id_tmp; + int diff; + + if (func_id < NBL_RES_MGT_TO_PF_NUM(res_mgt)) { + *pfid = func_id; + *vfid = U32_MAX; + return 0; + } + + for (pf_id_tmp = 0; pf_id_tmp < NBL_RES_MGT_TO_PF_NUM(res_mgt); + 
pf_id_tmp++) { + diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pf_id_tmp); + sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + diff; + if (func_id >= sriov_info->start_vf_func_id && + func_id < sriov_info->start_vf_func_id + + sriov_info->num_vfs) { + *pfid = pf_id_tmp; + *vfid = func_id - sriov_info->start_vf_func_id; + return 0; + } + } + + return U32_MAX; +} + +static int func_id_to_bdf(void *p, u16 func_id, u8 *bus, u8 *dev, u8 *function) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_sriov_info *sriov_info; + int pfid = U32_MAX; + int vfid = U32_MAX; + int diff; + u8 pf_bus, pf_devfn, devfn; + + if (nbl_res_func_id_to_pfvfid(p, func_id, &pfid, &vfid)) + return U32_MAX; + + diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid); + sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + diff; + pf_bus = PCI_BUS_NUM(sriov_info->bdf); + pf_devfn = sriov_info->bdf & 0xff; + + if (vfid != U32_MAX) { + *bus = pf_bus + ((pf_devfn + sriov_info->offset + + sriov_info->stride * vfid) >> + 8); + devfn = (pf_devfn + sriov_info->offset + + sriov_info->stride * vfid) & + 0xff; + } else { + *bus = pf_bus; + devfn = pf_devfn; + } + + *dev = PCI_SLOT(devfn); + *function = PCI_FUNC(devfn); + return 0; +} + +static u16 pfvfid_to_func_id(void *p, int pfid, int vfid) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_sriov_info *sriov_info; + int diff; + + if (vfid == U32_MAX) + return pfid; + + diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid); + sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + diff; + + return sriov_info->start_vf_func_id + vfid; +} + +static u64 get_func_bar_base_addr(void *p, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_sriov_info 
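The `func_id_to_pfvfid()` / `pfvfid_to_func_id()` pair above implements a two-level id space: the first `pf_num` function ids are PFs, and VFs follow in contiguous per-PF blocks tracked by `start_vf_func_id`/`num_vfs`. A simplified round-trip sketch that assumes a uniform VF count per PF (the real driver reads each PF's range from firmware, so `VF_PER_PF` here is purely illustrative; `-1` stands in for the driver's `U32_MAX` "not a VF" sentinel):

```c
#include <assert.h>

#define PF_NUM		2	/* illustrative PF count */
#define VF_PER_PF	4	/* illustrative, uniform VF block size */

/* Split a flat function id into (pf, vf); vf is -1 for a PF itself. */
static int func_to_pfvf(int func_id, int *pf, int *vf)
{
	if (func_id < PF_NUM) {
		*pf = func_id;
		*vf = -1;
		return 0;
	}
	*pf = (func_id - PF_NUM) / VF_PER_PF;
	*vf = (func_id - PF_NUM) % VF_PER_PF;
	return *pf < PF_NUM ? 0 : -1;
}

/* Inverse mapping: PFs occupy ids [0, PF_NUM), VFs follow in blocks. */
static int pfvf_to_func(int pf, int vf)
{
	return vf < 0 ? pf : PF_NUM + pf * VF_PER_PF + vf;
}
```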
*sriov_info; + u64 base_addr = 0; + int pfid = U32_MAX; + int vfid = U32_MAX; + int diff; + + if (nbl_res_func_id_to_pfvfid(p, func_id, &pfid, &vfid)) + return 0; + + diff = nbl_common_pf_id_subtraction_mgtpf_id(common, pfid); + sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + diff; + if (!sriov_info->pf_bar_start) { + nbl_err(common, + "Try to get bar addr for func %d, but PF_%d sriov not init", + func_id, pfid); + return 0; + } + + if (vfid == U32_MAX) + base_addr = sriov_info->pf_bar_start; + else + base_addr = sriov_info->vf_bar_start + + sriov_info->vf_bar_len * vfid; + + nbl_debug(common, "pfid %d vfid %d base_addr %llx\n", pfid, vfid, + base_addr); + return base_addr; +} + +static u8 vsi_id_to_eth_id(void *p, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + + if (eth_info) + return eth_info + ->eth_id[nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id)]; + else + return 0; +} + +static u8 eth_id_to_pf_id(void *p, u8 eth_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + int i; + u8 pf_id_offset = 0; + + for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET) { + if (i == eth_id) + break; + pf_id_offset++; + } + + return pf_id_offset + NBL_COMMON_TO_MGT_PF(common); +} + +int nbl_res_func_id_to_pfvfid(struct nbl_resource_mgt *res_mgt, u16 func_id, + int *pfid, int *vfid) +{ + if (!res_mgt->common_ops.func_id_to_pfvfid) + return func_id_to_pfvfid(res_mgt, func_id, pfid, vfid); + + return res_mgt->common_ops.func_id_to_pfvfid(res_mgt, func_id, pfid, + vfid); +} + +u16 nbl_res_pfvfid_to_func_id(struct nbl_resource_mgt *res_mgt, int pfid, + int vfid) +{ + if (!res_mgt->common_ops.pfvfid_to_func_id) + return pfvfid_to_func_id(res_mgt, pfid, vfid); + + return res_mgt->common_ops.pfvfid_to_func_id(res_mgt, pfid, 
vfid); +} + +u16 nbl_res_pfvfid_to_vsi_id(struct nbl_resource_mgt *res_mgt, int pfid, + int vfid, u16 type) +{ + if (!res_mgt->common_ops.pfvfid_to_vsi_id) + return pfvfid_to_vsi_id(res_mgt, pfid, vfid, type); + + return res_mgt->common_ops.pfvfid_to_vsi_id(res_mgt, pfid, vfid, type); +} + +int nbl_res_func_id_to_bdf(struct nbl_resource_mgt *res_mgt, u16 func_id, + u8 *bus, u8 *dev, u8 *function) +{ + if (!res_mgt->common_ops.func_id_to_bdf) + return func_id_to_bdf(res_mgt, func_id, bus, dev, function); + + return res_mgt->common_ops.func_id_to_bdf(res_mgt, func_id, bus, dev, + function); +} + +u16 nbl_res_vsi_id_to_func_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id) +{ + if (!res_mgt->common_ops.vsi_id_to_func_id) + return vsi_id_to_func_id(res_mgt, vsi_id); + + return res_mgt->common_ops.vsi_id_to_func_id(res_mgt, vsi_id); +} + +int nbl_res_vsi_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id) +{ + if (!res_mgt->common_ops.vsi_id_to_pf_id) + return vsi_id_to_pf_id(res_mgt, vsi_id); + + return res_mgt->common_ops.vsi_id_to_pf_id(res_mgt, vsi_id); +} + +u16 nbl_res_func_id_to_vsi_id(struct nbl_resource_mgt *res_mgt, u16 func_id, + u16 type) +{ + if (!res_mgt->common_ops.func_id_to_vsi_id) + return func_id_to_vsi_id(res_mgt, func_id, type); + + return res_mgt->common_ops.func_id_to_vsi_id(res_mgt, func_id, type); +} + +u64 nbl_res_get_func_bar_base_addr(struct nbl_resource_mgt *res_mgt, + u16 func_id) +{ + if (!res_mgt->common_ops.get_func_bar_base_addr) + return get_func_bar_base_addr(res_mgt, func_id); + + return res_mgt->common_ops.get_func_bar_base_addr(res_mgt, func_id); +} + +u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id) +{ + if (!res_mgt->common_ops.vsi_id_to_eth_id) + return vsi_id_to_eth_id(res_mgt, vsi_id); + + return res_mgt->common_ops.vsi_id_to_eth_id(res_mgt, vsi_id); +} + +u8 nbl_res_eth_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u8 eth_id) +{ + if (!res_mgt->common_ops.eth_id_to_pf_id) + return 
eth_id_to_pf_id(res_mgt, eth_id); + + return res_mgt->common_ops.eth_id_to_pf_id(res_mgt, eth_id); +} + +bool nbl_res_get_fix_capability(void *priv, enum nbl_fix_cap_type cap_type) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + + return test_bit(cap_type, res_mgt->fix_capability); +} + +void nbl_res_set_fix_capability(struct nbl_resource_mgt *res_mgt, + enum nbl_fix_cap_type cap_type) +{ + set_bit(cap_type, res_mgt->fix_capability); +} + +void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type, + enum nbl_vsi_serv_type *dst_type) +{ + if (src_type == NBL_VSI_DATA) + *dst_type = NBL_VSI_SERV_PF_DATA_TYPE; + else if (src_type == NBL_VSI_CTRL) + *dst_type = NBL_VSI_SERV_PF_CTLR_TYPE; +} + +bool nbl_res_vf_is_active(void *priv, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_resource_info *resource_info = res_mgt->resource_info; + + return test_bit(func_id, resource_info->func_bitmap); +} + +void nbl_res_set_hw_status(void *priv, enum nbl_hw_status hw_status) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + hw_ops->set_hw_status(NBL_RES_MGT_TO_HW_PRIV(res_mgt), hw_status); +} + +int nbl_res_get_pf_vf_num(void *priv, u16 pf_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_sriov_info *sriov_info; + + if (pf_id >= NBL_RES_MGT_TO_PF_NUM(res_mgt)) + return -1; + + sriov_info = NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) + pf_id; + if (!sriov_info->num_vfs) + return -1; + + return sriov_info->num_vfs; +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h new file mode 100644 index 000000000000..e90d25e6bc20 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h @@ -0,0 +1,860 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_RESOURCE_H_ +#define _NBL_RESOURCE_H_ + +#include "nbl_core.h" +#include "nbl_hw.h" + +#define NBL_RES_MGT_TO_COMMON(res_mgt) ((res_mgt)->common) +#define NBL_RES_MGT_TO_COMMON_OPS(res_mgt) (&((res_mgt)->common_ops)) +#define NBL_RES_MGT_TO_DEV(res_mgt) \ + NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt)) +#define NBL_RES_MGT_TO_DMA_DEV(res_mgt) \ + NBL_COMMON_TO_DMA_DEV(NBL_RES_MGT_TO_COMMON(res_mgt)) +#define NBL_RES_MGT_TO_INTR_MGT(res_mgt) ((res_mgt)->intr_mgt) +#define NBL_RES_MGT_TO_QUEUE_MGT(res_mgt) ((res_mgt)->queue_mgt) +#define NBL_RES_MGT_TO_TXRX_MGT(res_mgt) ((res_mgt)->txrx_mgt) +#define NBL_RES_MGT_TO_FLOW_MGT(res_mgt) ((res_mgt)->flow_mgt) +#define NBL_RES_MGT_TO_VSI_MGT(res_mgt) ((res_mgt)->vsi_mgt) +#define NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt) ((res_mgt)->adminq_mgt) +#define NBL_RES_MGT_TO_PROD_OPS(res_mgt) ((res_mgt)->product_ops) +#define NBL_RES_MGT_TO_RES_INFO(res_mgt) ((res_mgt)->resource_info) +#define NBL_RES_MGT_TO_SRIOV_INFO(res_mgt) \ + (NBL_RES_MGT_TO_RES_INFO(res_mgt)->sriov_info) +#define NBL_RES_MGT_TO_ETH_INFO(res_mgt) \ + (NBL_RES_MGT_TO_RES_INFO(res_mgt)->eth_info) +#define NBL_RES_MGT_TO_VSI_INFO(res_mgt) \ + (NBL_RES_MGT_TO_RES_INFO(res_mgt)->vsi_info) +#define NBL_RES_MGT_TO_PF_NUM(res_mgt) \ + (NBL_RES_MGT_TO_RES_INFO(res_mgt)->max_pf) +#define NBL_RES_MGT_TO_USTORE_STATS(res_mgt) \ + (NBL_RES_MGT_TO_RES_INFO(res_mgt)->ustore_stats) + +#define NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt) ((res_mgt)->hw_ops_tbl) +#define NBL_RES_MGT_TO_HW_OPS(res_mgt) (NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt)->ops) +#define NBL_RES_MGT_TO_HW_PRIV(res_mgt) \ + (NBL_RES_MGT_TO_HW_OPS_TBL(res_mgt)->priv) +#define NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt) ((res_mgt)->chan_ops_tbl) +#define NBL_RES_MGT_TO_CHAN_OPS(res_mgt) \ + (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->ops) +#define NBL_RES_MGT_TO_CHAN_PRIV(res_mgt) \ + (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->priv) +#define 
NBL_RES_MGT_TO_TX_RING(res_mgt, index) \ + (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->tx_rings[(index)]) +#define NBL_RES_MGT_TO_RX_RING(res_mgt, index) \ + (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->rx_rings[(index)]) +#define NBL_RES_MGT_TO_VECTOR(res_mgt, index) \ + (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->vectors[(index)]) + +#define NBL_RES_BASE_QID(res_mgt) NBL_RES_MGT_TO_RES_INFO(res_mgt)->base_qid +#define NBL_RES_NOFITY_QID(res_mgt, local_qid) \ + (NBL_RES_BASE_QID(res_mgt) * 2 + (local_qid)) + +#define NBL_MAX_NET_ID NBL_MAX_FUNC +#define NBL_MAX_JUMBO_FRAME_SIZE (9600) +#define NBL_PKT_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) + +#define NBL_TPID_PORT_NUM (1031) +#define NBL_VLAN_TPYE (0) +#define NBL_QINQ_TPYE (1) + +/* --------- QUEUE ---------- */ +#define NBL_MAX_TXRX_QUEUE (2048) +#define NBL_DEFAULT_DESC_NUM (1024) +#define NBL_MAX_TXRX_QUEUE_PER_FUNC (256) + +#define NBL_DEFAULT_REP_HW_QUEUE_NUM (16) +#define NBL_DEFAULT_PF_HW_QUEUE_NUM (16) +#define NBL_DEFAULT_USER_HW_QUEUE_NUM (16) +#define NBL_DEFAULT_VF_HW_QUEUE_NUM (2) +#define NBL_VSI_PF_LEGACY_QUEUE_NUM_MAX \ + (NBL_MAX_TXRX_QUEUE_PER_FUNC - NBL_DEFAULT_REP_HW_QUEUE_NUM) + +#define NBL_SPECIFIC_VSI_NET_ID_OFFSET (4) +#define NBL_MAX_CACHE_SIZE (256) +#define NBL_MAX_BATCH_DESC (64) + +enum nbl_qid_map_table_type { + NBL_MASTER_QID_MAP_TABLE, + NBL_SLAVE_QID_MAP_TABLE, + NBL_QID_MAP_TABLE_MAX +}; + +struct nbl_queue_vsi_info { + u32 curr_qps; + u16 curr_qps_static; /* This will not be reset when netdev down */ + u16 vsi_index; + u16 vsi_id; + u16 rss_ret_base; + u16 rss_entry_size; + u16 net_id; + u16 queue_offset; + u16 queue_num; + bool rss_vld; + bool vld; +}; + +struct nbl_queue_info { + struct nbl_queue_vsi_info vsi_info[NBL_VSI_MAX]; + u64 notify_addr; + u32 qid_map_index; + u16 num_txrx_queues; + u16 rss_ret_base; + u16 *txrx_queues; + u16 *queues_context; + u32 *uvn_stat_pkt_drop; + u16 rss_entry_size; + u16 split; + u32 curr_qps; + u16 queue_size; +}; + +struct nbl_adapt_desc_gother { + u16 
level; + u32 uvn_desc_rd_entry; + u64 get_desc_stats_jiffies; +}; + +struct nbl_queue_mgt { + DECLARE_BITMAP(txrx_queue_bitmap, NBL_MAX_TXRX_QUEUE); + DECLARE_BITMAP(rss_ret_bitmap, NBL_EPRO_RSS_RET_TBL_DEPTH); + struct nbl_qid_map_table qid_map_table[NBL_QID_MAP_TABLE_ENTRIES]; + struct nbl_queue_info queue_info[NBL_MAX_FUNC]; + u16 net_id_ref_vsinum[NBL_MAX_NET_ID]; + u32 total_qid_map_entries; + int qid_map_select; + bool qid_map_ready; + u32 qid_map_tail[NBL_QID_MAP_TABLE_MAX]; + struct nbl_adapt_desc_gother adapt_desc_gother; +}; + +/* --------- INTERRUPT ---------- */ +#define NBL_MAX_OTHER_INTERRUPT 1024 +#define NBL_MAX_NET_INTERRUPT 4096 + +struct nbl_msix_map { + u16 valid:1; + u16 global_msix_index:13; + u16 rsv:2; +}; + +struct nbl_msix_map_table { + struct nbl_msix_map *base_addr; + dma_addr_t dma; + size_t size; +}; + +struct nbl_func_interrupt_resource_mng { + u16 num_interrupts; + u16 num_net_interrupts; + u16 msix_base; + u16 msix_max; + u16 *interrupts; + struct nbl_msix_map_table msix_map_table; +}; + +struct nbl_interrupt_mgt { + DECLARE_BITMAP(interrupt_net_bitmap, NBL_MAX_NET_INTERRUPT); + DECLARE_BITMAP(interrupt_others_bitmap, NBL_MAX_OTHER_INTERRUPT); + struct nbl_func_interrupt_resource_mng func_intr_res[NBL_MAX_FUNC]; +}; + +/* --------- TXRX ---------- */ +struct nbl_txrx_vsi_info { + u16 ring_offset; + u16 ring_num; +}; + +struct nbl_ring_desc { + /* buffer address */ + __le64 addr; + /* buffer length */ + __le32 len; + /* buffer ID */ + __le16 id; + /* the flags depending on descriptor type */ + __le16 flags; +}; + +struct nbl_tx_buffer { + struct nbl_ring_desc *next_to_watch; + union { + struct sk_buff *skb; + }; + dma_addr_t dma; + u32 len; + + unsigned int bytecount; + unsigned short gso_segs; + bool page; + u32 tx_flags; +}; + +struct nbl_dma_info { + dma_addr_t addr; + struct page *page; + u32 size; +}; + +struct nbl_page_cache { + u32 head; + u32 tail; + struct nbl_dma_info page_cache[NBL_MAX_CACHE_SIZE]; +}; + +struct 
nbl_rx_buffer { + struct nbl_dma_info *di; + u16 offset; + u16 rx_pad; + u16 size; + bool last_in_page; + bool first_in_page; +}; + +struct nbl_res_vector { + struct nbl_napi_struct nbl_napi; + struct nbl_res_tx_ring *tx_ring; + struct nbl_res_rx_ring *rx_ring; + u8 __iomem *irq_enable_base; + u32 irq_data; + bool started; + bool net_msix_mask_en; +}; + +struct nbl_res_tx_ring { + /*data path*/ + struct nbl_ring_desc *desc; + struct nbl_tx_buffer *tx_bufs; + struct device *dma_dev; + struct net_device *netdev; + u8 __iomem *notify_addr; + struct nbl_queue_stats stats; + struct u64_stats_sync syncp; + struct nbl_tx_queue_stats tx_stats; + enum nbl_product_type product_type; + u16 queue_index; + u16 desc_num; + u16 notify_qid; + u16 avail_used_flags; + /* device ring wrap counter */ + bool used_wrap_counter; + u16 next_to_use; + u16 next_to_clean; + u16 tail_ptr; + u16 mode; + u16 vlan_tci; + u16 vlan_proto; + u8 eth_id; + u8 extheader_tx_len; + + /* control path */ + // dma for desc[] + dma_addr_t dma; + // size for desc[] + unsigned int size; + bool valid; + + struct nbl_txrx_vsi_info *vsi_info; +} ____cacheline_internodealigned_in_smp; + +struct nbl_res_rx_ring { + /* data path */ + struct nbl_ring_desc *desc; + struct nbl_rx_buffer *rx_bufs; + struct nbl_dma_info *di; + struct device *dma_dev; + struct net_device *netdev; + struct page_pool *page_pool; + struct nbl_queue_stats stats; + struct nbl_rx_queue_stats rx_stats; + struct u64_stats_sync syncp; + struct nbl_page_cache page_cache; + + enum nbl_product_type product_type; + u32 buf_len; + u16 avail_used_flags; + bool used_wrap_counter; + u8 nid; + u16 next_to_use; + u16 next_to_clean; + u16 tail_ptr; + u16 mode; + u16 desc_num; + u16 queue_index; + u16 vlan_tci; + u16 vlan_proto; + bool linear_skb; + + /* control path */ + struct nbl_common_info *common; + void *txrx_mgt; + // dma for desc[] + dma_addr_t dma; + // size for desc[] + unsigned int size; + bool valid; + u16 notify_qid; + + u16 frags_num_per_page; 
+} ____cacheline_internodealigned_in_smp; + +struct nbl_txrx_mgt { + struct nbl_res_vector **vectors; + struct nbl_res_tx_ring **tx_rings; + struct nbl_res_rx_ring **rx_rings; + struct nbl_txrx_vsi_info vsi_info[NBL_VSI_MAX]; + u16 tx_ring_num; + u16 rx_ring_num; +}; + +struct nbl_vsi_mgt { +}; + +struct nbl_adminq_mgt { + u32 fw_last_hb_seq; + unsigned long fw_last_hb_time; + struct work_struct eth_task; + struct nbl_resource_mgt *res_mgt; + u8 module_inplace_changed[NBL_MAX_ETHERNET]; + u8 link_state_changed[NBL_MAX_ETHERNET]; + bool fw_resetting; + struct wait_queue_head wait_queue; + struct mutex eth_lock; /* Protects link_state_changed. */ + void *cmd_filter; +}; + +/* --------- FLOW ---------- */ +#define NBL_FEM_HT_PP0_LEN (2 * 1024) +#define NBL_MACVLAN_TABLE_LEN (4096 * 2) + +enum nbl_next_stg_id_e { + NBL_NEXT_STG_PA = 1, + NBL_NEXT_STG_IPRO = 2, + NBL_NEXT_STG_PP0_S0 = 3, + NBL_NEXT_STG_PP0_S1 = 4, + NBL_NEXT_STG_PP1_S0 = 5, + NBL_NEXT_STG_PP1_S1 = 6, + NBL_NEXT_STG_PP2_S0 = 7, + NBL_NEXT_STG_PP2_S1 = 8, + NBL_NEXT_STG_MCC = 9, + NBL_NEXT_STG_ACL_S0 = 10, + NBL_NEXT_STG_ACL_S1 = 11, + NBL_NEXT_STG_EPRO = 12, + NBL_NEXT_STG_BYPASS = 0xf, +}; + +enum { + NBL_FLOW_UP_TNL, + NBL_FLOW_UP, + NBL_FLOW_DOWN, + NBL_FLOW_MACVLAN_MAX, + NBL_FLOW_LLDP_LACP_UP = NBL_FLOW_MACVLAN_MAX, + NBL_FLOW_L2_UP_MULTI_MCAST, + NBL_FLOW_L3_UP_MULTI_MCAST, + NBL_FLOW_UP_MULTI_MCAST_END, + NBL_FLOW_L2_DOWN_MULTI_MCAST = NBL_FLOW_UP_MULTI_MCAST_END, + NBL_FLOW_L3_DOWN_MULTI_MCAST, + NBL_FLOW_DOWN_MULTI_MCAST_END, + NBL_FLOW_TYPE_MAX = NBL_FLOW_DOWN_MULTI_MCAST_END, +}; + +struct nbl_flow_ht_key { + u16 vid; + u16 ht_other_index; + u32 kt_index; +}; + +struct nbl_flow_ht_tbl { + struct nbl_flow_ht_key key[4]; + u32 ref_cnt; +}; + +struct nbl_flow_ht_mng { + struct nbl_flow_ht_tbl *hash_map[NBL_FEM_HT_PP0_LEN]; +}; + +struct nbl_flow_fem_entry { + s32 type; + u16 flow_id; + u16 ht0_hash; + u16 ht1_hash; + u16 hash_table; + u16 hash_bucket; + u16 tcam_index; + u8 
tcam_flag; + u8 flow_type; +}; + +struct nbl_flow_mcc_node { + struct list_head node; + u16 data; + u16 mcc_id; + u16 mcc_action; + bool mcc_head; + u8 type; +}; + +struct nbl_flow_mcc_group { + struct list_head group_node; + /* list_head for mcc_node_list */ + struct list_head mcc_node; + struct list_head mcc_head; + unsigned long *vsi_bitmap; + u32 nbits; + u32 vsi_base; + u32 vsi_num; + u32 ref_cnt; + u16 up_mcc_id; + u16 down_mcc_id; + bool multi; +}; + +struct nbl_flow_switch_res { + void *mac_hash_tbl; + unsigned long *vf_bitmap; + struct list_head allmulti_head; + struct list_head allmulti_list; + struct list_head mcc_group_head; + struct nbl_flow_fem_entry allmulti_up[2]; + struct nbl_flow_fem_entry allmulti_down[2]; + u16 vld; + u16 network_status; + u16 pfc_mode; + u16 bp_mode; + u16 allmulti_first_mcc; + u16 num_vfs; + u16 active_vfs; + u8 ether_id; +}; + +struct nbl_flow_lacp_rule { + struct nbl_flow_fem_entry entry; + struct list_head node; + u16 vsi; +}; + +struct nbl_flow_lldp_rule { + struct nbl_flow_fem_entry entry; + struct list_head node; + u16 vsi; +}; + +#define NBL_FLOW_PMD_ND_UPCALL_NA (0) +#define NBL_FLOW_PMD_ND_UPCALL_NS (1) +#define NBL_FLOW_PMD_ND_UPCALL_FLOW_NUM (2) + +struct nbl_flow_mgt { + unsigned long *flow_id_bitmap; + unsigned long *mcc_id_bitmap; + DECLARE_BITMAP(tcam_id, NBL_TCAM_TABLE_LEN); + struct nbl_flow_ht_mng pp0_ht0_mng; + struct nbl_flow_ht_mng pp0_ht1_mng; + struct nbl_flow_switch_res switch_res[NBL_MAX_ETHERNET]; + struct list_head lldp_list; + struct list_head lacp_list; + struct list_head ul4s_head; + struct list_head dprbac_head; + u32 pp_tcam_count; + u32 flow_id_cnt; + u16 vsi_max_per_switch; +}; + +#define NBL_FLOW_INIT_BIT BIT(1) +#define NBL_FLOW_AVAILABLE_BIT BIT(2) +#define NBL_ALL_PROFILE_NUM (64) +#define NBL_ASSOC_PROFILE_GRAPH_NUM (32) +#define NBL_ASSOC_PROFILE_NUM (16) +#define NBL_ASSOC_PROFILE_STAGE_NUM (8) +#define NBL_PROFILE_KEY_MAX_NUM (32) +#define NBL_FLOW_KEY_NAME_SIZE (32) +#define 
NBL_FLOW_INDEX_LEN 131072 +#define NBL_FLOW_TABLE_NUM (64 * 1024) + +#define NBL_AT_MAX_NUM 8 +#define NBL_MAX_ACTION_NUM 16 +#define NBL_ACT_BYTE_LEN 32 + +enum nbl_flow_key_type { + NBL_FLOW_KEY_TYPE_PID, // profile id + NBL_FLOW_KEY_TYPE_ACTION, // AT action data, in 22 bits + NBL_FLOW_KEY_TYPE_PHV, // keys: PHV fields, inport, tab_index + // and other extracted 16 bits actions + NBL_FLOW_KEY_TYPE_MASK, // mask 4 bits + NBL_FLOW_KEY_TYPE_BTS // bit setter +}; + +#define NBL_PP0_KT_NUM (0) +#define NBL_PP1_KT_NUM (24 * 1024) +#define NBL_PP2_KT_NUM (96 * 1024) +#define NBL_PP0_KT_OFFSET (120 * 1024) +#define NBL_PP1_KT_OFFSET (96 * 1024) +#define NBL_FEM_HT_PP1_LEN (6 * 1024) +#define NBL_FEM_HT_PP2_LEN (16 * 1024) +#define NBL_FEM_HT_PP0_DEPTH (2 * 1024) +#define NBL_FEM_HT_PP1_DEPTH (6 * 1024) +#define NBL_FEM_HT_PP2_DEPTH (0) /* 16K, treat as zero */ +#define NBL_FEM_AT_PP1_LEN (12 * 1024) +#define NBL_FEM_AT2_PP1_LEN (4 * 1024) +#define NBL_FEM_AT_PP2_LEN (64 * 1024) +#define NBL_FEM_AT2_PP2_LEN (16 * 1024) +#define NBL_TC_MCC_TBL_DEPTH (4096) +#define NBL_TC_ENCAP_TBL_DEPTH (4 * 1024) + +struct nbl_flow_key_info { + bool valid; + enum nbl_flow_key_type key_type; + u16 offset; + u16 length; + u8 key_id; + char name[NBL_FLOW_KEY_NAME_SIZE]; +}; + +struct nbl_profile_msg { + bool valid; + // pp loopback or not + bool pp_mode; + bool key_full; + bool pt_cmd; + bool from_start; + bool to_end; + bool need_upcall; + + // id in range of 0 to 2 + u8 pp_id; + + // id in range of 0 to 15 + u8 profile_id; + + // id in range of 0 to 47 + u8 g_profile_id; + + // count of valid profile keys in the flow_keys list + u8 key_count; + u16 key_len; + u64 key_flag; + u8 act_count; + u8 pre_assoc_profile_id[NBL_ASSOC_PROFILE_NUM]; + u8 next_assoc_profile_id[NBL_ASSOC_PROFILE_NUM]; + // store all profile key info + struct nbl_flow_key_info flow_keys[NBL_PROFILE_KEY_MAX_NUM]; +}; + +/* --------- INFO ---------- */ +#define NBL_MAX_VF 
(NBL_MAX_FUNC - NBL_MAX_PF) + +struct nbl_sriov_info { + unsigned int bdf; + unsigned int num_vfs; + unsigned int start_vf_func_id; + unsigned short offset; + unsigned short stride; + unsigned short active_vf_num; + u64 vf_bar_start; + u64 vf_bar_len; + u64 pf_bar_start; +}; + +struct nbl_eth_info { + DECLARE_BITMAP(eth_bitmap, NBL_MAX_ETHERNET); + u64 port_caps[NBL_MAX_ETHERNET]; + u64 port_advertising[NBL_MAX_ETHERNET]; + u64 port_lp_advertising[NBL_MAX_ETHERNET]; + u32 link_speed[NBL_MAX_ETHERNET]; /* in Mbps units */ + u8 active_fc[NBL_MAX_ETHERNET]; + u8 active_fec[NBL_MAX_ETHERNET]; + u8 link_state[NBL_MAX_ETHERNET]; + u8 module_inplace[NBL_MAX_ETHERNET]; + u8 port_type[NBL_MAX_ETHERNET]; /* enum nbl_port_type */ + u8 port_max_rate[NBL_MAX_ETHERNET]; /* enum nbl_port_max_rate */ + u8 module_repluged[NBL_MAX_ETHERNET]; + + u8 pf_bitmap[NBL_MAX_ETHERNET]; + u8 eth_num; + u8 resv[3]; + u8 eth_id[NBL_MAX_PF]; + u8 logic_eth_id[NBL_MAX_PF]; + u64 link_down_count[NBL_MAX_ETHERNET]; +}; + +enum nbl_vsi_serv_type { + NBL_VSI_SERV_PF_DATA_TYPE, + NBL_VSI_SERV_PF_CTLR_TYPE, + NBL_VSI_SERV_PF_USER_TYPE, + NBL_VSI_SERV_PF_XDP_TYPE, + NBL_VSI_SERV_VF_DATA_TYPE, + /* used when pf_num > eth_num; the extra PFs belong to pf0's switch */ + NBL_VSI_SERV_PF_EXTRA_TYPE, + NBL_VSI_SERV_MAX_TYPE, +}; + +struct nbl_vsi_serv_info { + u16 base_id; + u16 num; +}; + +struct nbl_vsi_mac_info { + u16 vlan_proto; + u16 vlan_tci; + int rate; + u8 mac[ETH_ALEN]; + bool trusted; +}; + +struct nbl_vsi_info { + u16 num; + struct nbl_vsi_serv_info serv_info[NBL_MAX_ETHERNET] + [NBL_VSI_SERV_MAX_TYPE]; + struct nbl_vsi_mac_info mac_info[NBL_MAX_FUNC]; +}; + +struct nbl_net_ring_num_info { + u16 pf_def_max_net_qp_num; + u16 vf_def_max_net_qp_num; + u16 net_max_qp_num[NBL_MAX_FUNC]; +}; + +/* Host Board Configuration */ +/* 256 Byte */ +struct nbl_host_board_config { + /* dw0/1 */ + u8 version; + char magic[7]; + + /* dw2 */ + u8 board_id; + u8 def_tlv_index; + u8 spi_flash_type; + u8 dw2_rsv_zero; // 
0x00 + + /* dw3 -bits */ + u32 port_type: 1; // 0: optical, 1: electrical + u32 port_number: 7; + u32 port_speed: 2; + u32 port_module_type: 3; // 0: SFP, 1: QSFP, 2: PHY + u32 upper_config: 1; // 0: lower, 1: upper + u32 dw3_bits_rsv1: 1; + u32 i2c_mdio: 1; // 0: i2c, 1: mdio + u32 mdio_pin: 1; // 0: N to N, 1: 1 to N + u32 pam4_supported: 1; // 0: no, 1: yes + u32 dual_bc_supported: 1; // 0: no, 1: yes + u32 bc_index: 1; + u32 disable_crypto: 1; // 0: no, 1: yes + u32 ocp_card: 1; // 0: no, 1: yes + u32 oem: 1; // 0: no, 1: yes + u32 dw3_bits_rsv2: 9; // 0 + + /* dw4 - bits */ + u32 dw4_bits_rsv; // 0 + + /* dw5 */ + u8 pcie_pf_mask; // bitmap + u8 pcie_vpd_mask; // bitmap + u8 pcie_lanes; // valid value: 1/2/4/8/16 + u8 pcie_speed; // valid value: 1/2/3/4 + + /* dw6 */ + u8 eth_lane_mask; // bitmap + u8 eth_mac_mask; // bitmap + u8 phy_type; + u8 board_version; + + /* dw7 */ + u8 ncsi_package_id; + u8 fru_eeprom_i2c_addr; + u8 ext_gpio_i2c_addr0; + u8 ext_gpio_i2c_addr1; + + /* dw8 */ + u8 phy_mdio_addr[4]; + + /* dw9~12 */ + u16 pcie_vendor_id; // 0x1F0F + u16 pcie_device_id; + u32 pcie_class_rev; // 0x02000000 + u16 pcie_sub_vendor_id; // 0x1F0F + u16 pcie_sub_device_id; // 0x0001 + u16 pcie_vf_device_id; // 0x340D + u16 pcie_vf_sub_device_id; // 0x0001 + + /* dw13 */ + u16 pf_max_vfs; + u16 device_max_qps; // 2048 + + /* dw14 */ + u16 smbus_addr0; + u16 smbus_addr1; + + /* dw15 */ + u32 temp_i2c_addr: 8; // onboard temperature sensor + u32 temp_type: 5; + u32 temp_port_index: 3; + u32 voltage_i2c_addr: 8; // onboard voltage sensor + u32 voltage_type: 5; + u32 voltage_port_index: 3; + + /* dw16~44 */ + u64 port_capability[2]; // dw16~19 + u8 port_gpio[4][16]; // dw20~35 + u8 misc_gpio[16]; // dw36~39 + + /* dw40~49 */ + char controller_part_no[8]; // dw40/41 + char board_pn[12]; // dw42~44 + char product_name[20]; // dw45~49 + + /* dw50~59 */ + u32 reserved_zero0; // 0x00 + u32 reserved_zero1; + u32 reserved_zero2; + u32 reserved_zero3; + u32 reserved_zero4; + 
u32 reserved_zero5; + u32 reserved_zero6; + u32 reserved_zero7; + u32 reserved_zero8; + u32 reserved_zero9; + + /* dw60~63 */ + u32 reserved_one0; // 0xFF + u32 reserved_one1; + u32 reserved_one2; + u32 reserved_one3; +}; + +struct nbl_serial_number_info { + u8 len; + char sn[128]; +}; + +struct nbl_resource_info { + /* ctrl-dev owned pfs */ + DECLARE_BITMAP(func_bitmap, NBL_MAX_FUNC); + struct nbl_sriov_info *sriov_info; + struct nbl_eth_info *eth_info; + struct nbl_vsi_info *vsi_info; + u32 base_qid; + u32 max_vf_num; + + struct nbl_net_ring_num_info net_ring_num_info; + + /* for af use */ + u16 eth_mode; + u8 max_pf; + struct nbl_board_port_info board_info; + /* store all pf names for vf/rep device name use */ + char pf_name_list[NBL_MAX_PF][IFNAMSIZ]; + + u8 link_forced_info[NBL_MAX_FUNC]; + struct nbl_mtu_entry mtu_list[NBL_MAX_MTU_NUM]; + + struct nbl_ustore_stats *ustore_stats; +}; + +struct nbl_resource_common_ops { + u16 (*vsi_id_to_func_id)(void *res_mgt, u16 vsi_id); + int (*vsi_id_to_pf_id)(void *res_mgt, u16 vsi_id); + u16 (*vsi_id_to_vf_id)(void *res_mgt, u16 vsi_id); + u16 (*pfvfid_to_func_id)(void *res_mgt, int pfid, int vfid); + u16 (*pfvfid_to_vsi_id)(void *res_mgt, int pfid, int vfid, u16 type); + u16 (*func_id_to_vsi_id)(void *res_mgt, u16 func_id, u16 type); + int (*func_id_to_pfvfid)(void *res_mgt, u16 func_id, int *pfid, + int *vfid); + int (*func_id_to_bdf)(void *res_mgt, u16 func_id, u8 *bus, u8 *dev, + u8 *function); + u64 (*get_func_bar_base_addr)(void *res_mgt, u16 func_id); + u16 (*get_particular_queue_id)(void *res_mgt, u16 vsi_id); + u8 (*vsi_id_to_eth_id)(void *res_mgt, u16 vsi_id); + u8 (*eth_id_to_pf_id)(void *res_mgt, u8 eth_id); + u8 (*eth_id_to_lag_id)(void *res_mgt, u8 eth_id); + bool (*check_func_active_by_queue)(void *res_mgt, u16 func_id); + int (*get_queue_num)(void *res_mgt, u16 func_id, u16 *tx_queue_num, + u16 *rx_queue_num); +}; + +struct nbl_res_product_ops { + /* for queue */ + void (*queue_mgt_init)(struct 
nbl_queue_mgt *queue_mgt); + int (*setup_qid_map_table)(struct nbl_resource_mgt *res_mgt, + u16 func_id, u64 notify_addr); + void (*remove_qid_map_table)(struct nbl_resource_mgt *res_mgt, + u16 func_id); + int (*init_qid_map_table)(struct nbl_resource_mgt *res_mgt, + struct nbl_queue_mgt *queue_mgt, + struct nbl_hw_ops *hw_ops); + + /* for intr */ + void (*nbl_intr_mgt_init)(struct nbl_resource_mgt *res_mgt); +}; + +struct nbl_resource_mgt { + struct nbl_resource_common_ops common_ops; + struct nbl_common_info *common; + struct nbl_resource_info *resource_info; + struct nbl_channel_ops_tbl *chan_ops_tbl; + struct nbl_hw_ops_tbl *hw_ops_tbl; + struct nbl_queue_mgt *queue_mgt; + struct nbl_interrupt_mgt *intr_mgt; + struct nbl_txrx_mgt *txrx_mgt; + struct nbl_flow_mgt *flow_mgt; + struct nbl_vsi_mgt *vsi_mgt; + struct nbl_adminq_mgt *adminq_mgt; + struct nbl_res_product_ops *product_ops; + DECLARE_BITMAP(fix_capability, NBL_FIX_CAP_NBITS); +}; + +/* Mgt structure for each product. + * Every individual mgt must have the common mgt as its first member, and + * contains its unique data structure in the rest of it. 
+ */ +struct nbl_resource_mgt_leonis { + struct nbl_resource_mgt res_mgt; +}; + +#define NBL_RES_FW_CMD_FILTER_MAX 8 +struct nbl_res_fw_cmd_filter { + int (*in)(struct nbl_resource_mgt *res_mgt, void *in, u16 in_len); + int (*out)(struct nbl_resource_mgt *res_mgt, void *in, u16 in_len, + void *out, u16 out_len); +}; + +u16 nbl_res_vsi_id_to_func_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id); +int nbl_res_vsi_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id); +u16 nbl_res_pfvfid_to_func_id(struct nbl_resource_mgt *res_mgt, int pfid, + int vfid); +u16 nbl_res_pfvfid_to_vsi_id(struct nbl_resource_mgt *res_mgt, int pfid, + int vfid, u16 type); +u16 nbl_res_func_id_to_vsi_id(struct nbl_resource_mgt *res_mgt, u16 func_id, + u16 type); +int nbl_res_func_id_to_pfvfid(struct nbl_resource_mgt *res_mgt, u16 func_id, + int *pfid, int *vfid); +u8 nbl_res_eth_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u8 eth_id); +u8 nbl_res_eth_id_to_lag_id(struct nbl_resource_mgt *res_mgt, u8 eth_id); +int nbl_res_func_id_to_bdf(struct nbl_resource_mgt *res_mgt, u16 func_id, + u8 *bus, u8 *dev, u8 *function); +u64 nbl_res_get_func_bar_base_addr(struct nbl_resource_mgt *res_mgt, + u16 func_id); +u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id); + +int nbl_adminq_mgt_start(struct nbl_resource_mgt *res_mgt); +void nbl_adminq_mgt_stop(struct nbl_resource_mgt *res_mgt); +int nbl_adminq_setup_ops(struct nbl_resource_ops *resource_ops); +bool nbl_res_get_fix_capability(void *priv, enum nbl_fix_cap_type cap_type); +void nbl_res_set_fix_capability(struct nbl_resource_mgt *res_mgt, + enum nbl_fix_cap_type cap_type); + +int nbl_res_open_sfp(struct nbl_resource_mgt *res_mgt, u8 eth_id); +int nbl_res_get_eth_mac(struct nbl_resource_mgt *res_mgt, u8 *mac, u8 eth_id); +void nbl_res_pf_dev_vsi_type_to_hw_vsi_type(u16 src_type, + enum nbl_vsi_serv_type *dst_type); +bool nbl_res_vf_is_active(void *priv, u16 func_id); +void nbl_res_set_hw_status(void *priv, enum 
nbl_hw_status hw_status); +int nbl_res_get_pf_vf_num(void *priv, u16 pf_id); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h index 57d88ef0fb6d..853bb3022e51 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h @@ -77,6 +77,25 @@ do { \ #define NBL_COMMON_TO_BOARD_ID(common) ((common)->board_id) #define NBL_COMMON_TO_LOGIC_ETH_ID(common) ((common)->logic_eth_id) +#define NBL_ONE_ETHERNET_PORT (1) +#define NBL_TWO_ETHERNET_PORT (2) +#define NBL_FOUR_ETHERNET_PORT (4) +#define NBL_DEFAULT_VSI_ID_GAP (1024) +#define NBL_TWO_ETHERNET_VSI_ID_GAP (512) +#define NBL_FOUR_ETHERNET_VSI_ID_GAP (256) + +#define NBL_VSI_ID_GAP(m) \ + ({ \ + typeof(m) _m = (m); \ + _m == NBL_FOUR_ETHERNET_PORT ? \ + NBL_FOUR_ETHERNET_VSI_ID_GAP : \ + (_m == NBL_TWO_ETHERNET_PORT ? \ + NBL_TWO_ETHERNET_VSI_ID_GAP : \ + NBL_DEFAULT_VSI_ID_GAP); \ + }) + +#define NBL_INVALID_QUEUE_ID (0xFFFF) + struct nbl_common_info { struct pci_dev *pdev; struct device *dev; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h index 1096feea5ce6..243869883801 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h @@ -21,7 +21,10 @@ struct nbl_hw_ops { u16 (*get_mailbox_rx_tail_ptr)(void *priv); bool (*check_mailbox_dma_err)(void *priv, bool tx); u32 (*get_host_pf_mask)(void *priv); - + u32 (*get_host_pf_fid)(void *priv, u16 func_id); + u32 (*get_real_bus)(void *priv); + u64 (*get_pf_bar_addr)(void *priv, u16 func_id); + u64 (*get_vf_bar_addr)(void *priv, u16 func_id); void (*cfg_mailbox_qinfo)(void *priv, u16 func_id, u16 bus, u16 devid, u16 function); void (*config_adminq_rxq)(void *priv, dma_addr_t dma_addr, @@ -34,9 
+37,19 @@ struct nbl_hw_ops { void (*update_adminq_queue_tail_ptr)(void *priv, u16 tail_ptr, u8 txrx); bool (*check_adminq_dma_err)(void *priv, bool tx); + u8 __iomem *(*get_hw_addr)(void *priv, size_t *size); void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); enum nbl_hw_status (*get_hw_status)(void *priv); - + void (*set_fw_ping)(void *priv, u32 ping); + u32 (*get_fw_pong)(void *priv); + void (*set_fw_pong)(void *priv, u32 pong); + int (*process_abnormal_event)(void *priv, + struct nbl_abnormal_event_info *info); + /* for board cfg */ + u32 (*get_fw_eth_num)(void *priv); + u32 (*get_fw_eth_map)(void *priv); + void (*get_board_info)(void *priv, struct nbl_board_port_info *board); + u32 (*get_quirks)(void *priv); }; struct nbl_hw_ops_tbl { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h new file mode 100644 index 000000000000..b0cc6ac973f4 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_resource.h @@ -0,0 +1,183 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_DEF_RESOURCE_H_ +#define _NBL_DEF_RESOURCE_H_ + +#include "nbl_include.h" + +struct nbl_resource_pt_ops { + netdev_tx_t (*start_xmit)(struct sk_buff *skb, + struct net_device *netdev); + int (*napi_poll)(struct napi_struct *napi, int budget); +}; + +struct nbl_resource_ops { + int (*init_chip_module)(void *priv); + void (*deinit_chip_module)(void *priv); + void (*get_resource_pt_ops)(void *priv, + struct nbl_resource_pt_ops *pt_ops); + int (*queue_init)(void *priv); + int (*vsi_init)(void *priv); + int (*init_vf_msix_map)(void *priv, u16 func_id, bool enable); + int (*configure_msix_map)(void *priv, u16 func_id, u16 num_net_msix, + u16 num_others_msix, bool net_msix_mask_en); + int (*destroy_msix_map)(void *priv, u16 func_id); + int (*enable_mailbox_irq)(void *priv, u16 func_id, u16 vector_id, + bool enable_msix); + int (*enable_abnormal_irq)(void *p, u16 vector_id, bool enable_msix); + int (*enable_adminq_irq)(void *p, u16 vector_id, bool enable_msix); + u16 (*get_global_vector)(void *priv, u16 vsi_id, u16 local_vec_id); + u16 (*get_msix_entry_id)(void *priv, u16 vsi_id, u16 local_vec_id); + int (*get_mbx_irq_num)(void *priv); + int (*get_adminq_irq_num)(void *priv); + int (*get_abnormal_irq_num)(void *priv); + + int (*alloc_rings)(void *priv, struct net_device *netdev, + struct nbl_ring_param *param); + void (*remove_rings)(void *priv); + dma_addr_t (*start_tx_ring)(void *priv, u8 ring_index); + void (*stop_tx_ring)(void *priv, u8 ring_index); + dma_addr_t (*start_rx_ring)(void *priv, u8 ring_index, bool use_napi); + void (*stop_rx_ring)(void *priv, u8 ring_index); + void (*update_rx_ring)(void *priv, u16 index); + void (*kick_rx_ring)(void *priv, u16 index); + struct nbl_napi_struct *(*get_vector_napi)(void *priv, u16 index); + void (*set_vector_info)(void *priv, u8 __iomem *irq_enable_base, + u32 irq_data, u16 index, bool mask_en); + void (*register_vsi_ring)(void *priv, u16 vsi_index, u16 ring_offset, + u16 ring_num); + int 
(*register_net)(void *priv, u16 func_id, + struct nbl_register_net_param *register_param, + struct nbl_register_net_result *register_result); + int (*unregister_net)(void *priv, u16 func_id); + int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num); + void (*free_txrx_queues)(void *priv, u16 vsi_id); + int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id, + u16 queue_offset, u16 queue_num); + int (*setup_q2vsi)(void *priv, u16 vsi_id); + void (*remove_q2vsi)(void *priv, u16 vsi_id); + int (*setup_rss)(void *priv, u16 vsi_id); + void (*remove_rss)(void *priv, u16 vsi_id); + int (*setup_queue)(void *priv, struct nbl_txrx_queue_param *param, + bool is_tx); + void (*remove_all_queues)(void *priv, u16 vsi_id); + int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld); + int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps, + bool rss_indir_set); + void (*remove_cqs)(void *priv, u16 vsi_id); + void (*clear_queues)(void *priv, u16 vsi_id); + + u16 (*get_local_queue_id)(void *priv, u16 vsi_id, u16 global_queue_id); + u16 (*get_global_queue_id)(void *priv, u16 vsi_id, u16 local_queue_id); + + u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id, + u32 *irq_data); + + int (*add_macvlan)(void *priv, u8 *mac, u16 vlan, u16 vsi); + void (*del_macvlan)(void *priv, u8 *mac, u16 vlan, u16 vsi); + int (*add_lldp_flow)(void *priv, u16 vsi); + void (*del_lldp_flow)(void *priv, u16 vsi); + int (*add_multi_rule)(void *priv, u16 vsi); + void (*del_multi_rule)(void *priv, u16 vsi); + int (*add_multi_mcast)(void *priv, u16 vsi); + void (*del_multi_mcast)(void *priv, u16 vsi); + int (*setup_multi_group)(void *priv); + void (*remove_multi_group)(void *priv); + + void (*clear_flow)(void *priv, u16 vsi_id); + + u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type); + void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id, + u8 *logic_eth_id); + int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode); + u32 (*get_tx_headroom)(void *priv); + + void 
(*get_rep_queue_info)(void *priv, u16 *queue_num, u16 *queue_size); + int (*set_mtu)(void *priv, u16 vsi_id, u16 mtu); + int (*get_max_mtu)(void *priv); + void (*get_net_stats)(void *priv, struct nbl_stats *queue_stats); + void (*get_rxfh_indir_size)(void *priv, u16 vsi_id, + u32 *rxfh_indir_size); + int (*set_rxfh_indir)(void *priv, u16 vsi_id, const u32 *indir, + u32 indir_size); + void (*cfg_txrx_vlan)(void *priv, u16 vlan_tci, u16 vlan_proto, + u8 vsi_index); + + u8 __iomem *(*get_hw_addr)(void *priv, size_t *size); + u16 (*get_function_id)(void *priv, u16 vsi_id); + void (*get_real_bdf)(void *priv, u16 vsi_id, u8 *bus, u8 *dev, + u8 *function); + + int (*get_port_attributes)(void *priv); + int (*update_ring_num)(void *priv); + int (*set_ring_num)(void *priv, + struct nbl_cmd_net_ring_num *param); + int (*get_part_number)(void *priv, char *part_number); + int (*get_serial_number)(void *priv, char *serial_number); + int (*enable_port)(void *priv, bool enable); + + void (*recv_port_notify)(void *priv, void *data); + int (*get_link_state)(void *priv, u8 eth_id, + struct nbl_eth_link_info *eth_link_info); + int (*set_eth_mac_addr)(void *priv, u8 *mac, u8 eth_id); + int (*process_abnormal_event)(void *priv, + struct nbl_abnormal_event_info *info); + int (*set_wol)(void *priv, u8 eth_id, bool enable); + void (*adapt_desc_gother)(void *priv); + void (*flr_clear_net)(void *priv, u16 vfid); + void (*flr_clear_queues)(void *priv, u16 vfid); + void (*flr_clear_flows)(void *priv, u16 vfid); + void (*flr_clear_interrupt)(void *priv, u16 vfid); + u16 (*covert_vfid_to_vsi_id)(void *priv, u16 vfid); + void (*unmask_all_interrupts)(void *priv); + u16 (*get_vf_function_id)(void *priv, u16 vsi_id, int vf_id); + u16 (*get_vf_vsi_id)(void *priv, u16 vsi_id, int vf_id); + bool (*check_vf_is_active)(void *priv, u16 func_id); + int (*get_ustore_total_pkt_drop_stats)(void *priv, u8 eth_id, + struct nbl_ustore_stats *stat); + + bool (*check_fw_heartbeat)(void *priv); + bool 
(*check_fw_reset)(void *priv); + int (*set_sfp_state)(void *priv, u8 eth_id, u8 state); + int (*passthrough_fw_cmd)(void *priv, + struct nbl_passthrough_fw_cmd *param, + struct nbl_passthrough_fw_cmd *result); + int (*get_board_id)(void *priv); + + bool (*get_product_fix_cap)(void *priv, enum nbl_fix_cap_type cap_type); + + dma_addr_t (*restore_abnormal_ring)(void *priv, int ring_index, + int type); + int (*restart_abnormal_ring)(void *priv, int ring_index, int type); + int (*stop_abnormal_sw_queue)(void *priv, u16 local_queue_id, int type); + int (*stop_abnormal_hw_queue)(void *priv, u16 vsi_id, + u16 local_queue_id, int type); + int (*get_link_forced)(void *priv, u16 vsi_id); + int (*set_tx_rate)(void *priv, u16 func_id, int tx_rate, int burst); + int (*set_rx_rate)(void *priv, u16 func_id, int rx_rate, int burst); + + u16 (*get_vsi_global_queue_id)(void *priv, u16 vsi_id, u16 local_qid); + + void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); + void (*get_active_func_bitmaps)(void *priv, unsigned long *bitmap, + int max_func); + + void (*register_dev_name)(void *priv, u16 vsi_id, char *name); + void (*get_dev_name)(void *priv, u16 vsi_id, char *name); + + int (*check_flow_table_spec)(void *priv, u16 vlan_cnt, u16 unicast_cnt, + u16 multicast_cnt); +}; + +struct nbl_resource_ops_tbl { + struct nbl_resource_ops *ops; + void *priv; +}; + +int nbl_res_init_leonis(void *p, struct nbl_init_param *param); +void nbl_res_remove_leonis(void *p); +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index 64ac886f0ba2..8759ba3d478c 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -12,6 +12,9 @@ /* ------ Basic definitions ------- */ #define NBL_DRIVER_NAME "nbl_core" +#define NBL_PAIR_ID_GET_TX(id) ((id) * 2 + 1) +#define NBL_PAIR_ID_GET_RX(id) ((id) * 2) + 
#define NBL_MAX_PF 8 #define NBL_NEXT_ID(id, max) \ @@ -20,11 +23,41 @@ ((_id) == (max) ? 0 : (_id) + 1); \ }) +#define NBL_MAX_FUNC (520) +#define NBL_MAX_MTU_NUM 15 + enum nbl_product_type { NBL_LEONIS_TYPE, NBL_PRODUCT_MAX, }; +enum nbl_fix_cap_type { + NBL_TASK_FW_HB_CAP, + NBL_TASK_FW_RESET_CAP, + NBL_TASK_CLEAN_ADMINDQ_CAP, + NBL_TASK_CLEAN_MAILBOX_CAP, + NBL_RESTOOL_CAP, + NBL_TASK_ADAPT_DESC_GOTHER, + NBL_PROCESS_FLR_CAP, + NBL_RECOVERY_ABN_STATUS, + NBL_TASK_KEEP_ALIVE, + NBL_TASK_RESET_CAP, + NBL_TASK_RESET_CTRL_CAP, + NBL_NEED_DESTROY_CHIP, + NBL_FIX_CAP_NBITS +}; + +enum nbl_sfp_module_state { + NBL_SFP_MODULE_OFF, + NBL_SFP_MODULE_ON, +}; + +enum { + NBL_VSI_DATA = 0,/* default vsi in kernel or independent dpdk */ + NBL_VSI_CTRL, + NBL_VSI_MAX, +}; + enum nbl_hw_status { NBL_HW_NOMAL, /* Most hw module is not work nomal exclude pcie/emp */ @@ -102,6 +135,78 @@ struct nbl_queue_cfg_param { u16 half_offload_en; }; +struct nbl_queue_stats { + u64 packets; + u64 bytes; + u64 descs; +}; + +struct nbl_tx_queue_stats { + u64 tso_packets; + u64 tso_bytes; + u64 tx_csum_packets; + u64 tx_busy; + u64 tx_dma_busy; + u64 tx_multicast_packets; + u64 tx_unicast_packets; + u64 tx_skb_free; + u64 tx_desc_addr_err_cnt; + u64 tx_desc_len_err_cnt; +}; + +struct nbl_rx_queue_stats { + u64 rx_csum_packets; + u64 rx_csum_errors; + u64 rx_multicast_packets; + u64 rx_unicast_packets; + u64 rx_desc_addr_err_cnt; + u64 rx_alloc_buf_err_cnt; + u64 rx_cache_reuse; + u64 rx_cache_full; + u64 rx_cache_empty; + u64 rx_cache_busy; + u64 rx_cache_waive; +}; + +struct nbl_stats { + /* for toe stats */ + u64 tso_packets; + u64 tso_bytes; + u64 tx_csum_packets; + u64 rx_csum_packets; + u64 rx_csum_errors; + u64 tx_busy; + u64 tx_dma_busy; + u64 tx_multicast_packets; + u64 tx_unicast_packets; + u64 rx_multicast_packets; + u64 rx_unicast_packets; + u64 tx_skb_free; + u64 tx_desc_addr_err_cnt; + u64 tx_desc_len_err_cnt; + u64 rx_desc_addr_err_cnt; + u64 rx_alloc_buf_err_cnt; + u64 
rx_cache_reuse; + u64 rx_cache_full; + u64 rx_cache_empty; + u64 rx_cache_busy; + u64 rx_cache_waive; + u64 tx_packets; + u64 tx_bytes; + u64 rx_packets; + u64 rx_bytes; +}; + +struct nbl_ustore_stats { + u64 rx_drop_packets; + u64 rx_trun_packets; +}; + +struct nbl_hw_stats { + u64 *total_uvn_stat_pkt_drop; + struct nbl_ustore_stats start_ustore_stats; +}; + enum nbl_fw_port_speed { NBL_FW_PORT_SPEED_10G, NBL_FW_PORT_SPEED_25G, @@ -109,8 +214,92 @@ enum nbl_fw_port_speed { NBL_FW_PORT_SPEED_100G, }; +#define PASSTHROUGH_FW_CMD_DATA_LEN (3072) +struct nbl_passthrough_fw_cmd { + u16 opcode; + u16 errcode; + u16 in_size; + u16 out_size; + u8 data[PASSTHROUGH_FW_CMD_DATA_LEN]; +}; + +#define NBL_NET_RING_NUM_CMD_LEN (520) +struct nbl_cmd_net_ring_num { + u16 pf_def_max_net_qp_num; + u16 vf_def_max_net_qp_num; + u16 net_max_qp_num[NBL_NET_RING_NUM_CMD_LEN]; +}; + +enum { + NBL_NETIF_F_SG_BIT, /* Scatter/gather IO. */ + NBL_NETIF_F_IP_CSUM_BIT, /* csum TCP/UDP over IPv4 */ + NBL_NETIF_F_HW_CSUM_BIT, /* csum all the packets. */ + NBL_NETIF_F_IPV6_CSUM_BIT, /* csum TCP/UDP over IPV6 */ + NBL_NETIF_F_HIGHDMA_BIT, /* DMA to high memory. */ + NBL_NETIF_F_HW_VLAN_CTAG_TX_BIT, /* Tx VLAN CTAG HW accel */ + NBL_NETIF_F_HW_VLAN_CTAG_RX_BIT, /* Rx VLAN CTAG HW accel */ + NBL_NETIF_F_HW_VLAN_CTAG_FILTER_BIT, /* Rx filtering on VLAN CTAG */ + NBL_NETIF_F_TSO_BIT, /* TCPv4 segmentation */ + NBL_NETIF_F_GSO_ROBUST_BIT, /* SKB_GSO_DODGY */ + NBL_NETIF_F_TSO6_BIT, /* TCPv6 segmentation */ + NBL_NETIF_F_GSO_GRE_BIT, /* GRE with TSO */ + NBL_NETIF_F_GSO_GRE_CSUM_BIT, /* GRE with csum with TSO */ + NBL_NETIF_F_GSO_UDP_TUNNEL_BIT, /* UDP TUNNEL with TSO */ + NBL_NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT, /* UDP TUNNEL with TSO & CSUM */ + NBL_NETIF_F_GSO_PARTIAL_BIT, /* Only segment inner-most L4 + * in hardware and all other + * headers in software. 
+ */ + NBL_NETIF_F_GSO_UDP_L4_BIT, /* UDP payload GSO (not UFO) */ + NBL_NETIF_F_SCTP_CRC_BIT, /* SCTP checksum offload */ + NBL_NETIF_F_NTUPLE_BIT, /* N-tuple filters supported */ + NBL_NETIF_F_RXHASH_BIT, /* Rx hashing offload */ + NBL_NETIF_F_RXCSUM_BIT, /* Rx checksumming offload */ + NBL_NETIF_F_HW_VLAN_STAG_TX_BIT, /* Tx VLAN STAG HW accel */ + NBL_NETIF_F_HW_VLAN_STAG_RX_BIT, /* Rx VLAN STAG HW accel */ + NBL_NETIF_F_HW_VLAN_STAG_FILTER_BIT, /* Rx filtering on VLAN STAG */ + NBL_NETIF_F_HW_TC_BIT, /* Offload TC infrastructure */ + NBL_FEATURES_COUNT +}; + +#define NBL_FEATURE(name) (1 << (NBL_##name##_BIT)) + +enum nbl_abnormal_event_module { + NBL_ABNORMAL_EVENT_DVN = 0, + NBL_ABNORMAL_EVENT_UVN, + NBL_ABNORMAL_EVENT_MAX, +}; + +struct nbl_abnormal_details { + bool abnormal; + u16 qid; + u16 vsi_id; +}; + +struct nbl_abnormal_event_info { + struct nbl_abnormal_details details[NBL_ABNORMAL_EVENT_MAX]; + u32 other_abnormal_info; +}; + enum nbl_performance_mode { NBL_QUIRKS_NO_TOE, NBL_QUIRKS_UVN_PREFETCH_ALIGN, }; + +struct nbl_ring_param { + u16 tx_ring_num; + u16 rx_ring_num; + u16 queue_size; +}; + +struct nbl_mtu_entry { + u32 ref_count; + u16 mtu_value; +}; + +struct nbl_napi_struct { + struct napi_struct napi; + atomic_t is_irq; +}; + #endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c index 3276dd2936ae..9cee11498e9f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c @@ -11,8 +11,8 @@ static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = { { .hw_init = nbl_hw_init_leonis, .hw_remove = nbl_hw_remove_leonis, - .res_init = NULL, - .res_remove = NULL, + .res_init = nbl_res_init_leonis, + .res_remove = nbl_res_remove_leonis, .chan_init = nbl_chan_init_common, .chan_remove = nbl_chan_remove_common, }, @@ -72,7 +72,13 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, ret = 
product_base_ops->chan_init(adapter, param); if (ret) goto chan_init_fail; + + ret = product_base_ops->res_init(adapter, param); + if (ret) + goto res_init_fail; return adapter; +res_init_fail: + product_base_ops->chan_remove(adapter); chan_init_fail: product_base_ops->hw_remove(adapter); hw_init_fail: @@ -87,6 +93,7 @@ void nbl_core_remove(struct nbl_adapter *adapter) dev = NBL_ADAP_TO_DEV(adapter); product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter); + product_base_ops->res_remove(adapter); product_base_ops->chan_remove(adapter); product_base_ops->hw_remove(adapter); devm_kfree(dev, adapter); -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
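The `NBL_VSI_ID_GAP` macro added to nbl_def_common.h in this patch selects the per-port VSI ID stride from the board's Ethernet port count. A minimal user-space sketch of the same selection logic (written as a plain function instead of the driver's statement-expression macro; the constant names are reused from the header for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the constants introduced in nbl_def_common.h. */
#define NBL_TWO_ETHERNET_PORT		2
#define NBL_FOUR_ETHERNET_PORT		4
#define NBL_DEFAULT_VSI_ID_GAP		1024
#define NBL_TWO_ETHERNET_VSI_ID_GAP	512
#define NBL_FOUR_ETHERNET_VSI_ID_GAP	256

/* More ports on one board split the VSI ID space into smaller
 * per-port strides; any port count other than 2 or 4 falls back
 * to the default 1024-wide gap, matching the macro's else arm.
 */
static uint16_t vsi_id_gap(unsigned int eth_port_num)
{
	switch (eth_port_num) {
	case NBL_FOUR_ETHERNET_PORT:
		return NBL_FOUR_ETHERNET_VSI_ID_GAP;
	case NBL_TWO_ETHERNET_PORT:
		return NBL_TWO_ETHERNET_VSI_ID_GAP;
	default:
		return NBL_DEFAULT_VSI_ID_GAP;
	}
}
```

The macro form in the header exists so the gap can be computed in other macro expansions; the function form above is only a readable restatement of the same mapping.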
* [PATCH v2 net-next 07/15] net/nebula-matrix: add intr resource definitions and implementation
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (5 preceding siblings ...)
2026-01-09 10:01 ` [PATCH v2 net-next 06/15] net/nebula-matrix: add resource " illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 08/15] net/nebula-matrix: add vsi, queue, adminq " illusion.wang
` (8 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list
The intr interfaces provide complete interrupt lifecycle management
(configuration, enabling, disabling, and destruction), status queries,
and performance tuning (such as interrupt suppression levels). They also
handle PF and VF interrupts differently, making them suitable for
high-performance networking devices (e.g., in SR-IOV scenarios).
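The allocation scheme in this patch draws "net" (queue) vectors and "other" (mailbox/adminq/abnormal) vectors from two separate bitmaps, and numbers net vectors after the "other" range. A simplified user-space sketch of that numbering scheme — pool sizes here are illustrative, not the driver's `NBL_MAX_OTHER_INTERRUPT`/`NBL_MAX_NET_INTERRUPT` values, and boolean arrays stand in for the kernel bitmap API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative pool sizes; the driver's limits differ. */
#define MAX_OTHER_IRQ	8
#define MAX_NET_IRQ	16

static bool other_used[MAX_OTHER_IRQ];
static bool net_used[MAX_NET_IRQ];

/* Stand-in for find_first_zero_bit() + set_bit(). */
static int grab_first_free(bool *used, int nbits)
{
	for (int i = 0; i < nbits; i++) {
		if (!used[i]) {
			used[i] = true;
			return i;
		}
	}
	return -1;	/* pool exhausted */
}

/* "Other" vectors occupy global IDs [0, MAX_OTHER_IRQ). */
static int alloc_other_vector(void)
{
	return grab_first_free(other_used, MAX_OTHER_IRQ);
}

/* Net vectors are offset past the "other" range, as in the patch's
 * interrupts[i] = intr_index + NBL_MAX_OTHER_INTERRUPT.
 */
static int alloc_net_vector(void)
{
	int idx = grab_first_free(net_used, MAX_NET_IRQ);

	return idx < 0 ? -1 : idx + MAX_OTHER_IRQ;
}
```

Freeing reverses the offset: a global ID at or above the "other" range is cleared in the net bitmap after subtracting the offset, which is exactly what `nbl_res_intr_destroy_msix_map()` does with `clear_bit()`.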
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 1 + .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 204 ++++++++ .../nbl_hw_leonis/nbl_resource_leonis.c | 18 + .../nebula-matrix/nbl/nbl_hw/nbl_interrupt.c | 448 ++++++++++++++++++ .../nebula-matrix/nbl/nbl_hw/nbl_interrupt.h | 13 + .../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 4 + .../nbl/nbl_include/nbl_def_hw.h | 17 + .../nbl/nbl_include/nbl_include.h | 6 + 8 files changed, 711 insertions(+) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index 977544cd1b95..9c20af47313e 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -10,6 +10,7 @@ nbl_core-objs += nbl_common/nbl_common.o \ nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \ nbl_hw/nbl_resource.o \ + nbl_hw/nbl_interrupt.o \ nbl_hw/nbl_adminq.o \ nbl_main.o diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c index 57cae6baaafd..cc792497d01f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c @@ -19,6 +19,164 @@ static u32 nbl_hw_get_quirks(void *priv) return quirks; } +static void nbl_hw_enable_mailbox_irq(void *priv, u16 func_id, bool enable_msix, + u16 global_vec_id) +{ + struct nbl_mailbox_qinfo_map_table mb_qinfo_map = { 0 }; + + nbl_hw_rd_regs(priv, NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id), + (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map)); + + if (enable_msix) { + mb_qinfo_map.msix_idx = global_vec_id; + mb_qinfo_map.msix_idx_valid = 1; + } else 
{ + mb_qinfo_map.msix_idx = 0; + mb_qinfo_map.msix_idx_valid = 0; + } + + nbl_hw_wr_regs(priv, NBL_MAILBOX_QINFO_MAP_REG_ARR(func_id), + (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map)); +} + +static void nbl_abnormal_intr_init(struct nbl_hw_mgt *hw_mgt) +{ + struct nbl_fem_int_mask fem_mask = { 0 }; + struct nbl_epro_int_mask epro_mask = { 0 }; + u32 top_ctrl_mask = 0xFFFFFFFF; + + /* Mask and clear fem cfg_err */ + nbl_hw_rd_regs(hw_mgt, NBL_FEM_INT_MASK, (u8 *)&fem_mask, + sizeof(fem_mask)); + fem_mask.cfg_err = 1; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_INT_MASK, (u8 *)&fem_mask, + sizeof(fem_mask)); + + memset(&fem_mask, 0, sizeof(fem_mask)); + fem_mask.cfg_err = 1; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_INT_STATUS, (u8 *)&fem_mask, + sizeof(fem_mask)); + + nbl_hw_rd_regs(hw_mgt, NBL_FEM_INT_MASK, (u8 *)&fem_mask, + sizeof(fem_mask)); + + /* Mask and clear epro cfg_err */ + nbl_hw_rd_regs(hw_mgt, NBL_EPRO_INT_MASK, (u8 *)&epro_mask, + sizeof(epro_mask)); + epro_mask.cfg_err = 1; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_INT_MASK, (u8 *)&epro_mask, + sizeof(epro_mask)); + + memset(&epro_mask, 0, sizeof(epro_mask)); + epro_mask.cfg_err = 1; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_INT_STATUS, (u8 *)&epro_mask, + sizeof(epro_mask)); + + /* Mask and clear all top_ctrl abnormal intrs.
+ */ + nbl_hw_wr_regs(hw_mgt, NBL_TOP_CTRL_INT_MASK, (u8 *)&top_ctrl_mask, + sizeof(top_ctrl_mask)); + + nbl_hw_wr_regs(hw_mgt, NBL_TOP_CTRL_INT_STATUS, (u8 *)&top_ctrl_mask, + sizeof(top_ctrl_mask)); +} + +static void nbl_hw_enable_abnormal_irq(void *priv, bool enable_msix, + u16 global_vec_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_abnormal_msix_vector abnormal_msix_vector = { 0 }; + u32 abnormal_timeout = 0x927C0; /* 600000, 1ms */ + u32 quirks; + + if (enable_msix) { + abnormal_msix_vector.idx = global_vec_id; + abnormal_msix_vector.vld = 1; + } + + quirks = nbl_hw_get_quirks(hw_mgt); + + if (!(quirks & BIT(NBL_QUIRKS_NO_TOE))) + abnormal_timeout = 0x3938700; /* 1s */ + + nbl_hw_wr_regs(hw_mgt, NBL_PADPT_ABNORMAL_TIMEOUT, + (u8 *)&abnormal_timeout, sizeof(abnormal_timeout)); + + nbl_hw_wr_regs(hw_mgt, NBL_PADPT_ABNORMAL_MSIX_VEC, + (u8 *)&abnormal_msix_vector, + sizeof(abnormal_msix_vector)); + + nbl_abnormal_intr_init(hw_mgt); +} + +static void nbl_hw_enable_msix_irq(void *priv, u16 global_vec_id) +{ + struct nbl_msix_notify msix_notify = { 0 }; + + msix_notify.glb_msix_idx = global_vec_id; + + nbl_hw_wr_regs(priv, NBL_PCOMPLETER_MSIX_NOTIRY_OFFSET, + (u8 *)&msix_notify, sizeof(msix_notify)); +} + +static u8 __iomem * +nbl_hw_get_msix_irq_enable_info(void *priv, u16 global_vec_id, u32 *irq_data) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_msix_notify msix_notify = { 0 }; + + msix_notify.glb_msix_idx = global_vec_id; + memcpy(irq_data, &msix_notify, sizeof(msix_notify)); + + return (hw_mgt->hw_addr + NBL_PCOMPLETER_MSIX_NOTIRY_OFFSET); +} + +static void nbl_hw_configure_msix_map(void *priv, u16 func_id, bool valid, + dma_addr_t dma_addr, u8 bus, u8 devid, + u8 function) +{ + struct nbl_function_msix_map function_msix_map = { 0 }; + + if (valid) { + function_msix_map.msix_map_base_addr = dma_addr; + /* use af's bdf, because dma memory is allocated by af */ + function_msix_map.function = function; +
function_msix_map.devid = devid; + function_msix_map.bus = bus; + function_msix_map.valid = 1; + } + + nbl_hw_wr_regs(priv, NBL_PCOMPLETER_FUNCTION_MSIX_MAP_REG_ARR(func_id), + (u8 *)&function_msix_map, sizeof(function_msix_map)); +} + +static void nbl_hw_configure_msix_info(void *priv, u16 func_id, bool valid, + u16 interrupt_id, u8 bus, u8 devid, + u8 function, bool msix_mask_en) +{ + struct nbl_pcompleter_host_msix_fid_table host_msix_fid_table = { 0 }; + struct nbl_host_msix_info msix_info = { 0 }; + + if (valid) { + host_msix_fid_table.vld = 1; + host_msix_fid_table.fid = func_id; + + msix_info.intrl_pnum = 0; + msix_info.intrl_rate = 0; + msix_info.function = function; + msix_info.devid = devid; + msix_info.bus = bus; + msix_info.valid = 1; + if (msix_mask_en) + msix_info.msix_mask_en = 1; + } + + nbl_hw_wr_regs(priv, NBL_PADPT_HOST_MSIX_INFO_REG_ARR(interrupt_id), + (u8 *)&msix_info, sizeof(msix_info)); + nbl_hw_wr_regs(priv, NBL_PCOMPLETER_HOST_MSIX_FID_TABLE(interrupt_id), + (u8 *)&host_msix_fid_table, sizeof(host_msix_fid_table)); +} + static void nbl_hw_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr, u8 txrx) { @@ -203,6 +361,20 @@ static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus, (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map)); } +static void nbl_hw_set_coalesce(void *priv, u16 interrupt_id, u16 pnum, + u16 rate) +{ + struct nbl_host_msix_info msix_info = { 0 }; + + nbl_hw_rd_regs(priv, NBL_PADPT_HOST_MSIX_INFO_REG_ARR(interrupt_id), + (u8 *)&msix_info, sizeof(msix_info)); + + msix_info.intrl_pnum = pnum; + msix_info.intrl_rate = rate; + nbl_hw_wr_regs(priv, NBL_PADPT_HOST_MSIX_INFO_REG_ARR(interrupt_id), + (u8 *)&msix_info, sizeof(msix_info)); +} + static void nbl_hw_config_adminq_rxq(void *priv, dma_addr_t dma_addr, int size_bwid) { @@ -277,6 +449,30 @@ static void nbl_hw_cfg_adminq_qinfo(void *priv, u16 bus, u16 devid, sizeof(adminq_qinfo_map)); } +static void nbl_hw_enable_adminq_irq(void *priv, bool enable_msix, + u16 
global_vec_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + struct nbl_adminq_qinfo_map_table adminq_qinfo_map = { 0 }; + + adminq_qinfo_map.bus = common->hw_bus; + adminq_qinfo_map.devid = common->devid; + adminq_qinfo_map.function = NBL_COMMON_TO_PCI_FUNC_ID(common); + + if (enable_msix) { + adminq_qinfo_map.msix_idx = global_vec_id; + adminq_qinfo_map.msix_idx_valid = 1; + } else { + adminq_qinfo_map.msix_idx = 0; + adminq_qinfo_map.msix_idx_valid = 0; + } + + nbl_hw_write_mbx_regs(priv, NBL_ADMINQ_MSIX_MAP_TABLE_ADDR, + (u8 *)&adminq_qinfo_map, + sizeof(adminq_qinfo_map)); +} + static void nbl_hw_update_adminq_queue_tail_ptr(void *priv, u16 tail_ptr, u8 txrx) { @@ -551,6 +747,9 @@ static enum nbl_hw_status nbl_hw_get_hw_status(void *priv) }; static struct nbl_hw_ops hw_ops = { + .configure_msix_map = nbl_hw_configure_msix_map, + .configure_msix_info = nbl_hw_configure_msix_info, + .set_coalesce = nbl_hw_set_coalesce, .update_mailbox_queue_tail_ptr = nbl_hw_update_mailbox_queue_tail_ptr, .config_mailbox_rxq = nbl_hw_config_mailbox_rxq, .config_mailbox_txq = nbl_hw_config_mailbox_txq, @@ -564,12 +763,17 @@ static struct nbl_hw_ops hw_ops = { .get_pf_bar_addr = nbl_hw_get_pf_bar_addr, .get_vf_bar_addr = nbl_hw_get_vf_bar_addr, .cfg_mailbox_qinfo = nbl_hw_cfg_mailbox_qinfo, + .enable_mailbox_irq = nbl_hw_enable_mailbox_irq, + .enable_abnormal_irq = nbl_hw_enable_abnormal_irq, + .enable_msix_irq = nbl_hw_enable_msix_irq, + .get_msix_irq_enable_info = nbl_hw_get_msix_irq_enable_info, .config_adminq_rxq = nbl_hw_config_adminq_rxq, .config_adminq_txq = nbl_hw_config_adminq_txq, .stop_adminq_rxq = nbl_hw_stop_adminq_rxq, .stop_adminq_txq = nbl_hw_stop_adminq_txq, .cfg_adminq_qinfo = nbl_hw_cfg_adminq_qinfo, + .enable_adminq_irq = nbl_hw_enable_adminq_irq, .update_adminq_queue_tail_ptr = nbl_hw_update_adminq_queue_tail_ptr, .check_adminq_dma_err = nbl_hw_check_adminq_dma_err, diff --git 
a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c index ea5c83b1ab76..b4c6de135a26 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c @@ -490,6 +490,7 @@ static struct nbl_resource_ops res_ops = { static struct nbl_res_product_ops product_ops = { }; +static bool is_ops_inited; static int nbl_res_setup_res_mgt(struct nbl_common_info *common, struct nbl_resource_mgt_leonis **res_mgt_leonis) @@ -537,15 +538,28 @@ static int nbl_res_setup_ops(struct device *dev, struct nbl_resource_ops_tbl **res_ops_tbl, struct nbl_resource_mgt_leonis *res_mgt_leonis) { + int ret = 0; + *res_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_resource_ops_tbl), GFP_KERNEL); if (!*res_ops_tbl) return -ENOMEM; + if (!is_ops_inited) { + ret = nbl_intr_setup_ops(&res_ops); + if (ret) + goto setup_fail; + is_ops_inited = true; + } + (*res_ops_tbl)->ops = &res_ops; (*res_ops_tbl)->priv = res_mgt_leonis; return 0; + +setup_fail: + nbl_res_remove_ops(dev, res_ops_tbl); + return -EAGAIN; } static int nbl_res_ctrl_dev_setup_eth_info(struct nbl_resource_mgt *res_mgt) @@ -851,6 +865,7 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis) { struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt; + nbl_intr_mgt_stop(res_mgt); nbl_res_ctrl_dev_ustore_stats_remove(res_mgt); nbl_res_ctrl_dev_remove_vsi_info(res_mgt); nbl_res_ctrl_dev_remove_eth_info(res_mgt); @@ -903,6 +918,9 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis, if (ret) goto start_fail; + ret = nbl_intr_mgt_start(res_mgt); + if (ret) + goto start_fail; nbl_res_set_fix_capability(res_mgt, NBL_TASK_FW_HB_CAP); nbl_res_set_fix_capability(res_mgt, NBL_TASK_FW_RESET_CAP); nbl_res_set_fix_capability(res_mgt, NBL_TASK_CLEAN_ADMINDQ_CAP); diff --git 
a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c new file mode 100644 index 000000000000..176478bcb414 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.c @@ -0,0 +1,448 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#include "nbl_interrupt.h" + +static int nbl_res_intr_destroy_msix_map(void *priv, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct device *dma_dev; + struct nbl_hw_ops *hw_ops; + struct nbl_interrupt_mgt *intr_mgt; + struct nbl_msix_map_table *msix_map_table; + u16 *interrupts; + u16 intr_num; + u16 i; + + if (!res_mgt) + return -EINVAL; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt); + + /* use ctrl dev bdf */ + hw_ops->configure_msix_map(NBL_RES_MGT_TO_HW_PRIV(res_mgt), func_id, + false, 0, 0, 0, 0); + + intr_num = intr_mgt->func_intr_res[func_id].num_interrupts; + interrupts = intr_mgt->func_intr_res[func_id].interrupts; + + WARN_ON(!interrupts); + for (i = 0; i < intr_num; i++) { + if (interrupts[i] >= NBL_MAX_OTHER_INTERRUPT) + clear_bit(interrupts[i] - NBL_MAX_OTHER_INTERRUPT, + intr_mgt->interrupt_net_bitmap); + else + clear_bit(interrupts[i], + intr_mgt->interrupt_others_bitmap); + + hw_ops->configure_msix_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + func_id, false, interrupts[i], 0, 0, + 0, false); + } + + kfree(interrupts); + intr_mgt->func_intr_res[func_id].interrupts = NULL; + intr_mgt->func_intr_res[func_id].num_interrupts = 0; + + msix_map_table = &intr_mgt->func_intr_res[func_id].msix_map_table; + dma_free_coherent(dma_dev, msix_map_table->size, + msix_map_table->base_addr, msix_map_table->dma); + msix_map_table->size = 0; + msix_map_table->base_addr = NULL; + msix_map_table->dma = 0; + + return 0; +} + +static int 
nbl_res_intr_configure_msix_map(void *priv, u16 func_id, + u16 num_net_msix, + u16 num_others_msix, + bool net_msix_mask_en) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct device *dma_dev; + struct nbl_hw_ops *hw_ops; + struct nbl_interrupt_mgt *intr_mgt; + struct nbl_common_info *common; + struct nbl_msix_map_table *msix_map_table; + struct nbl_msix_map *msix_map_entries; + u16 *interrupts; + u16 requested; + u16 intr_index; + u16 i; + u8 bus, devid, function; + bool msix_mask_en; + int ret = 0; + + if (!res_mgt) + return -EINVAL; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt); + common = NBL_RES_MGT_TO_COMMON(res_mgt); + + if (intr_mgt->func_intr_res[func_id].interrupts) + nbl_res_intr_destroy_msix_map(priv, func_id); + + nbl_res_func_id_to_bdf(res_mgt, func_id, &bus, &devid, &function); + + msix_map_table = &intr_mgt->func_intr_res[func_id].msix_map_table; + WARN_ON(msix_map_table->base_addr); + msix_map_table->size = + sizeof(struct nbl_msix_map) * NBL_MSIX_MAP_TABLE_MAX_ENTRIES; + msix_map_table->base_addr = dma_alloc_coherent(dma_dev, + msix_map_table->size, + &msix_map_table->dma, + GFP_ATOMIC | __GFP_ZERO); + if (!msix_map_table->base_addr) { + pr_err("Allocate DMA memory for function msix map table failed\n"); + msix_map_table->size = 0; + return -ENOMEM; + } + + requested = num_net_msix + num_others_msix; + interrupts = kcalloc(requested, sizeof(interrupts[0]), GFP_ATOMIC); + if (!interrupts) { + pr_err("Allocate function interrupts array failed\n"); + ret = -ENOMEM; + goto alloc_interrupts_err; + } + + intr_mgt->func_intr_res[func_id].interrupts = interrupts; + intr_mgt->func_intr_res[func_id].num_interrupts = requested; + intr_mgt->func_intr_res[func_id].num_net_interrupts = num_net_msix; + + for (i = 0; i < num_net_msix; i++) { + intr_index = find_first_zero_bit(intr_mgt->interrupt_net_bitmap, + NBL_MAX_NET_INTERRUPT); + if 
(intr_index == NBL_MAX_NET_INTERRUPT) { + pr_err("There is no available interrupt left\n"); + ret = -EAGAIN; + goto get_interrupt_err; + } + interrupts[i] = intr_index + NBL_MAX_OTHER_INTERRUPT; + set_bit(intr_index, intr_mgt->interrupt_net_bitmap); + } + + for (i = num_net_msix; i < requested; i++) { + intr_index = + find_first_zero_bit(intr_mgt->interrupt_others_bitmap, + NBL_MAX_OTHER_INTERRUPT); + if (intr_index == NBL_MAX_OTHER_INTERRUPT) { + pr_err("There is no available interrupt left\n"); + ret = -EAGAIN; + goto get_interrupt_err; + } + interrupts[i] = intr_index; + set_bit(intr_index, intr_mgt->interrupt_others_bitmap); + } + + msix_map_entries = msix_map_table->base_addr; + for (i = 0; i < requested; i++) { + msix_map_entries[i].global_msix_index = interrupts[i]; + msix_map_entries[i].valid = 1; + + if (i < num_net_msix && net_msix_mask_en) + msix_mask_en = 1; + else + msix_mask_en = 0; + hw_ops->configure_msix_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + func_id, true, interrupts[i], bus, + devid, function, msix_mask_en); + if (i < num_net_msix) + hw_ops->set_coalesce(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + interrupts[i], 0, 0); + } + + /* use ctrl dev bdf */ + hw_ops->configure_msix_map(NBL_RES_MGT_TO_HW_PRIV(res_mgt), func_id, + true, msix_map_table->dma, common->hw_bus, + common->devid, + NBL_COMMON_TO_PCI_FUNC_ID(common)); + + return 0; + +get_interrupt_err: + while (i--) { + intr_index = interrupts[i]; + if (intr_index >= NBL_MAX_OTHER_INTERRUPT) + clear_bit(intr_index - NBL_MAX_OTHER_INTERRUPT, + intr_mgt->interrupt_net_bitmap); + else + clear_bit(intr_index, + intr_mgt->interrupt_others_bitmap); + } + kfree(interrupts); + intr_mgt->func_intr_res[func_id].num_interrupts = 0; + intr_mgt->func_intr_res[func_id].interrupts = NULL; + +alloc_interrupts_err: + dma_free_coherent(dma_dev, msix_map_table->size, + msix_map_table->base_addr, msix_map_table->dma); + msix_map_table->size = 0; + msix_map_table->base_addr = NULL; + msix_map_table->dma = 0; + + return ret; 
+} + +static int nbl_res_init_vf_msix_map(void *priv, u16 func_id, bool enable) +{ +#define NBL_VF_NET_MSIX_NUM (4) + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 net_msix_num = NBL_VF_NET_MSIX_NUM; + u16 tx_queue_num = 0; + u16 rx_queue_num = 0; + + if (enable) { + if (res_mgt->common_ops.get_queue_num) { + res_mgt->common_ops.get_queue_num(priv, func_id, + &tx_queue_num, + &rx_queue_num); + net_msix_num = tx_queue_num + rx_queue_num; + } + + return nbl_res_intr_configure_msix_map(priv, func_id, + net_msix_num, 1, true); + } + + nbl_res_intr_destroy_msix_map(priv, func_id); + + return 0; +} + +static int nbl_res_intr_destroy_msix_map_export(void *priv, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + int ret = 0; + + ret = nbl_res_intr_destroy_msix_map(priv, func_id); + + if (func_id >= NBL_RES_MGT_TO_PF_NUM(res_mgt)) + ret |= nbl_res_init_vf_msix_map(priv, func_id, true); + + return ret; +} + +static int nbl_res_intr_enable_mailbox_irq(void *priv, u16 func_id, + u16 vector_id, bool enable_msix) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops; + struct nbl_interrupt_mgt *intr_mgt; + u16 global_vec_id; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + + global_vec_id = intr_mgt->func_intr_res[func_id].interrupts[vector_id]; + hw_ops->enable_mailbox_irq(NBL_RES_MGT_TO_HW_PRIV(res_mgt), func_id, + enable_msix, global_vec_id); + + return 0; +} + +static int nbl_res_intr_enable_abnormal_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops; + struct nbl_interrupt_mgt *intr_mgt; + u16 global_vec_id; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + + global_vec_id = intr_mgt->func_intr_res[0].interrupts[vector_id]; + 
hw_ops->enable_abnormal_irq(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + enable_msix, global_vec_id); + return 0; +} + +static u8 __iomem *nbl_res_get_msix_irq_enable_info(void *priv, + u16 global_vec_id, + u32 *irq_data) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + return hw_ops->get_msix_irq_enable_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_vec_id, irq_data); +} + +static u16 nbl_res_intr_get_global_vector(void *priv, u16 vsi_id, + u16 local_vec_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_interrupt_mgt *intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + + return intr_mgt->func_intr_res[func_id].interrupts[local_vec_id]; +} + +static u16 nbl_res_intr_get_msix_entry_id(void *priv, u16 vsi_id, + u16 local_vec_id) +{ + return local_vec_id; +} + +static int nbl_res_intr_enable_adminq_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops; + struct nbl_interrupt_mgt *intr_mgt; + u16 global_vec_id; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + + global_vec_id = intr_mgt->func_intr_res[0].interrupts[vector_id]; + hw_ops->enable_adminq_irq(NBL_RES_MGT_TO_HW_PRIV(res_mgt), enable_msix, + global_vec_id); + return 0; +} + +static int nbl_res_intr_get_mbx_irq_num(void *priv) +{ + return 1; +} + +static int nbl_res_intr_get_adminq_irq_num(void *priv) +{ + return 1; +} + +static int nbl_res_intr_get_abnormal_irq_num(void *priv) +{ + return 1; +} + +static void nbl_res_flr_clear_interrupt(void *priv, u16 vf_id) +{ +} + +static void nbl_res_intr_unmask(struct nbl_resource_mgt *res_mgt, + u16 interrupts_id) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + hw_ops->enable_msix_irq(NBL_RES_MGT_TO_HW_PRIV(res_mgt), 
interrupts_id); +} + +static void nbl_res_unmask_all_interrupts(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_interrupt_mgt *intr_mgt = NBL_RES_MGT_TO_INTR_MGT(res_mgt); + int i, j; + + for (i = 0; i < NBL_MAX_PF; i++) { + if (intr_mgt->func_intr_res[i].interrupts) { + for (j = 0; + j < intr_mgt->func_intr_res[i].num_interrupts; j++) + nbl_res_intr_unmask(res_mgt, + intr_mgt->func_intr_res[i] + .interrupts[j]); + } + } +} + +/* NBL_INTR_SET_OPS(ops_name, func) + * + * Use X Macros to reduce setup and remove codes. + */ +#define NBL_INTR_OPS_TBL \ +do { \ + NBL_INTR_SET_OPS(init_vf_msix_map, nbl_res_init_vf_msix_map); \ + NBL_INTR_SET_OPS(configure_msix_map, \ + nbl_res_intr_configure_msix_map); \ + NBL_INTR_SET_OPS(destroy_msix_map, \ + nbl_res_intr_destroy_msix_map_export); \ + NBL_INTR_SET_OPS(enable_mailbox_irq, \ + nbl_res_intr_enable_mailbox_irq); \ + NBL_INTR_SET_OPS(enable_abnormal_irq, \ + nbl_res_intr_enable_abnormal_irq); \ + NBL_INTR_SET_OPS(enable_adminq_irq, \ + nbl_res_intr_enable_adminq_irq); \ + NBL_INTR_SET_OPS(get_msix_irq_enable_info, \ + nbl_res_get_msix_irq_enable_info); \ + NBL_INTR_SET_OPS(get_global_vector, \ + nbl_res_intr_get_global_vector); \ + NBL_INTR_SET_OPS(get_msix_entry_id, \ + nbl_res_intr_get_msix_entry_id); \ + NBL_INTR_SET_OPS(get_mbx_irq_num, \ + nbl_res_intr_get_mbx_irq_num); \ + NBL_INTR_SET_OPS(get_adminq_irq_num, \ + nbl_res_intr_get_adminq_irq_num); \ + NBL_INTR_SET_OPS(get_abnormal_irq_num, \ + nbl_res_intr_get_abnormal_irq_num); \ + NBL_INTR_SET_OPS(flr_clear_interrupt, \ + nbl_res_flr_clear_interrupt); \ + NBL_INTR_SET_OPS(unmask_all_interrupts, \ + nbl_res_unmask_all_interrupts); \ +} while (0) + +/* Structure starts here, adding an op should not modify anything below */ +static int nbl_intr_setup_mgt(struct device *dev, + struct nbl_interrupt_mgt **intr_mgt) +{ + *intr_mgt = + devm_kzalloc(dev, sizeof(struct nbl_interrupt_mgt), GFP_KERNEL); + if (!*intr_mgt) + return 
-ENOMEM; + + return 0; +} + +static void nbl_intr_remove_mgt(struct device *dev, + struct nbl_interrupt_mgt **intr_mgt) +{ + devm_kfree(dev, *intr_mgt); + *intr_mgt = NULL; +} + +int nbl_intr_mgt_start(struct nbl_resource_mgt *res_mgt) +{ + struct device *dev; + struct nbl_interrupt_mgt **intr_mgt; + + dev = NBL_RES_MGT_TO_DEV(res_mgt); + intr_mgt = &NBL_RES_MGT_TO_INTR_MGT(res_mgt); + + return nbl_intr_setup_mgt(dev, intr_mgt); +} + +void nbl_intr_mgt_stop(struct nbl_resource_mgt *res_mgt) +{ + struct device *dev; + struct nbl_interrupt_mgt **intr_mgt; + + dev = NBL_RES_MGT_TO_DEV(res_mgt); + intr_mgt = &NBL_RES_MGT_TO_INTR_MGT(res_mgt); + + if (!(*intr_mgt)) + return; + + nbl_intr_remove_mgt(dev, intr_mgt); +} + +int nbl_intr_setup_ops(struct nbl_resource_ops *res_ops) +{ +#define NBL_INTR_SET_OPS(name, func) \ + do { \ + res_ops->NBL_NAME(name) = func; \ + } while (0) + NBL_INTR_OPS_TBL; +#undef NBL_INTR_SET_OPS + + return 0; +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h new file mode 100644 index 000000000000..5448bcf36416 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_interrupt.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_INTERRUPT_H_ +#define _NBL_INTERRUPT_H_ + +#include "nbl_resource.h" + +#define NBL_MSIX_MAP_TABLE_MAX_ENTRIES (1024) +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h index e90d25e6bc20..5cbe0ebc4f89 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h @@ -845,6 +845,10 @@ u8 nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id); int nbl_adminq_mgt_start(struct nbl_resource_mgt *res_mgt); void nbl_adminq_mgt_stop(struct nbl_resource_mgt *res_mgt); int nbl_adminq_setup_ops(struct nbl_resource_ops *resource_ops); + +int nbl_intr_mgt_start(struct nbl_resource_mgt *res_mgt); +void nbl_intr_mgt_stop(struct nbl_resource_mgt *res_mgt); +int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops); bool nbl_res_get_fix_capability(void *priv, enum nbl_fix_cap_type cap_type); void nbl_res_set_fix_capability(struct nbl_resource_mgt *res_mgt, enum nbl_fix_cap_type cap_type); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h index 243869883801..ee4194ab7252 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h @@ -10,6 +10,14 @@ #include "nbl_include.h" struct nbl_hw_ops { + void (*configure_msix_map)(void *priv, u16 func_id, bool valid, + dma_addr_t dma_addr, u8 bus, u8 devid, + u8 function); + void (*configure_msix_info)(void *priv, u16 func_id, bool valid, + u16 interrupt_id, u8 bus, u8 devid, + u8 function, bool net_msix_mask_en); + void (*set_coalesce)(void *priv, u16 interrupt_id, u16 pnum, u16 rate); + void (*update_mailbox_queue_tail_ptr)(void *priv, u16 tail_ptr, u8 txrx); void (*config_mailbox_rxq)(void *priv, dma_addr_t dma_addr, @@ -27,6 +35,13 @@ 
struct nbl_hw_ops { u64 (*get_vf_bar_addr)(void *priv, u16 func_id); void (*cfg_mailbox_qinfo)(void *priv, u16 func_id, u16 bus, u16 devid, u16 function); + void (*enable_mailbox_irq)(void *priv, u16 func_id, bool enable_msix, + u16 global_vec_id); + void (*enable_abnormal_irq)(void *priv, bool enable_msix, + u16 global_vec_id); + void (*enable_msix_irq)(void *priv, u16 global_vec_id); + u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id, + u32 *irq_data); void (*config_adminq_rxq)(void *priv, dma_addr_t dma_addr, int size_bwid); void (*config_adminq_txq)(void *priv, dma_addr_t dma_addr, @@ -34,6 +49,8 @@ struct nbl_hw_ops { void (*stop_adminq_rxq)(void *priv); void (*stop_adminq_txq)(void *priv); void (*cfg_adminq_qinfo)(void *priv, u16 bus, u16 devid, u16 function); + void (*enable_adminq_irq)(void *priv, bool enable_msix, + u16 global_vec_id); void (*update_adminq_queue_tail_ptr)(void *priv, u16 tail_ptr, u8 txrx); bool (*check_adminq_dma_err)(void *priv, bool tx); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index 8759ba3d478c..134704229116 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -17,6 +17,10 @@ #define NBL_MAX_PF 8 +#define NBL_RATE_MBPS_100G 100000 +#define NBL_RATE_MBPS_25G 25000 +#define NBL_RATE_MBPS_10G 10000 + #define NBL_NEXT_ID(id, max) \ ({ \ typeof(id) _id = (id); \ @@ -25,6 +29,8 @@ #define NBL_MAX_FUNC (520) #define NBL_MAX_MTU_NUM 15 +/* Used for macros to pass checkpatch */ +#define NBL_NAME(x) x enum nbl_product_type { NBL_LEONIS_TYPE, -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
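The MSI-X map setup in this patch walks interrupt bitmaps with find_first_zero_bit()/set_bit() and, when the pool is exhausted, unwinds every bit taken so far before returning -EAGAIN. A minimal userspace sketch of that allocate-or-roll-back pattern follows; all names, types, and sizes here are illustrative, not the driver's actual symbols:

```c
#include <assert.h>
#include <string.h>

#define SKETCH_MAX_INTR 8

struct sketch_intr_pool {
	unsigned char bitmap[SKETCH_MAX_INTR]; /* 1 = slot in use */
};

/* Return first free slot, or SKETCH_MAX_INTR if none
 * (mirrors find_first_zero_bit() returning the bitmap size).
 */
static int sketch_find_first_zero(const struct sketch_intr_pool *p)
{
	int i;

	for (i = 0; i < SKETCH_MAX_INTR; i++)
		if (!p->bitmap[i])
			return i;
	return SKETCH_MAX_INTR;
}

/* Allocate 'requested' slots into out[]; on exhaustion, free
 * everything taken so far and fail (the driver returns -EAGAIN here).
 */
static int sketch_alloc_interrupts(struct sketch_intr_pool *p,
				   int requested, int *out)
{
	int i, idx;

	for (i = 0; i < requested; i++) {
		idx = sketch_find_first_zero(p);
		if (idx == SKETCH_MAX_INTR)
			goto rollback;
		p->bitmap[idx] = 1;
		out[i] = idx;
	}
	return 0;

rollback:
	while (i--)
		p->bitmap[out[i]] = 0;
	return -1;
}
```

The driver's version additionally splits the pool into "net" and "other" bitmaps with an offset between them, which is why its unwind path subtracts NBL_MAX_OTHER_INTERRUPT before clearing net bits.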
* [PATCH v2 net-next 08/15] net/nebula-matrix: add vsi, queue, adminq resource definitions and implementation
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (6 preceding siblings ...)
2026-01-09 10:01 ` [PATCH v2 net-next 07/15] net/nebula-matrix: add intr resource " illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
2026-01-09 18:38 ` Andrew Lunn
2026-01-09 10:01 ` [PATCH v2 net-next 09/15] net/nebula-matrix: add flow " illusion.wang
` (7 subsequent siblings)
15 siblings, 1 reply; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list

VSI resource management functions include:
- VSI basic operations (promiscuous mode)
- hardware module initialization and de-initialization

Queue resource management functions include:
- queue init and deinit
- queue alloc and free
- queue RSS configuration
- queue hardware configuration
- queue QoS and rate control
- queue descriptor gather

AdminQ resource management functions include:
- Hardware configuration: send configuration commands to the hardware via AdminQ (such as setting port properties, queue counts, MAC addresses, etc.).
- State monitoring: obtain hardware status (such as link status and port properties).
- Firmware management: support firmware reading, writing, erasing, checksum verification, and activation.
- Event notification: handle hardware events (such as link status changes and module insertion/removal).
- Command filtering: perform legality checks on commands sent to the hardware. 
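Part of the AdminQ state monitoring above is deriving a module's maximum rate from its SFF-8472 EEPROM: the nominal signalling rate byte is in units of 100 MBd, and the reserved value 255 means "use the extended rate-max byte, which is in units of 250 MBd". A standalone sketch of that decode rule follows; the function name and return convention are illustrative, not the driver's API:

```c
#include <assert.h>

/* Decode SFF-8472 nominal signalling rate (section 5.6).
 * br_nom: nominal rate byte, units of 100 MBd; 255 = see br_max.
 * br_max: extended rate-max byte, units of 250 MBd.
 * Returns the rate in MBd, or 0 if unspecified.
 */
static unsigned int sketch_sff8472_rate_mbd(unsigned char br_nom,
					    unsigned char br_max)
{
	if (br_nom == 255)
		return (unsigned int)br_max * 250;
	if (br_nom == 0)
		return 0; /* rate not specified by the module */
	return (unsigned int)br_nom * 100;
}
```

The driver then buckets the result (divided by 1000) into 25G/10G/1G classes, with QSFP28 or PAM4 encoding overriding the bucket to 100G.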
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 3 + .../nebula-matrix/nbl/nbl_hw/nbl_adminq.c | 1336 +++++++++++++ .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 1703 ++++++++++++++++- .../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c | 1430 ++++++++++++++ .../nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h | 23 + .../nbl_hw_leonis/nbl_resource_leonis.c | 30 + .../nbl_hw_leonis/nbl_resource_leonis.h | 12 + .../nebula-matrix/nbl/nbl_hw/nbl_queue.c | 60 + .../nebula-matrix/nbl/nbl_hw/nbl_queue.h | 11 + .../nebula-matrix/nbl/nbl_hw/nbl_resource.c | 17 + .../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 10 + .../nebula-matrix/nbl/nbl_hw/nbl_vsi.c | 168 ++ .../nebula-matrix/nbl/nbl_hw/nbl_vsi.h | 12 + .../nbl/nbl_include/nbl_def_hw.h | 55 + .../nbl/nbl_include/nbl_include.h | 134 ++ 15 files changed, 4996 insertions(+), 8 deletions(-) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index 9c20af47313e..e611110ac369 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -7,10 +7,13 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o nbl_core-objs += nbl_common/nbl_common.o \ nbl_channel/nbl_channel.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \ + nbl_hw/nbl_hw_leonis/nbl_queue_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \ nbl_hw/nbl_resource.o \ nbl_hw/nbl_interrupt.o \ + 
nbl_hw/nbl_queue.o \ + nbl_hw/nbl_vsi.o \ nbl_hw/nbl_adminq.o \ nbl_main.o diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c index 2db160a92256..a56de810de79 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_adminq.c @@ -6,6 +6,273 @@ #include "nbl_adminq.h" +static int nbl_res_aq_update_ring_num(void *priv); + +/* **** FW CMD FILTERS START **** */ + +static int nbl_res_aq_chk_net_ring_num(struct nbl_resource_mgt *res_mgt, + struct nbl_cmd_net_ring_num *p) +{ + struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt); + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + u32 sum = 0, pf_real_num = 0, vf_real_num = 0; + int i, tmp; + + pf_real_num = NBL_VSI_PF_LEGAL_QUEUE_NUM(p->pf_def_max_net_qp_num); + vf_real_num = NBL_VSI_VF_REAL_QUEUE_NUM(p->vf_def_max_net_qp_num); + + if (pf_real_num > NBL_MAX_TXRX_QUEUE_PER_FUNC || + vf_real_num > NBL_MAX_TXRX_QUEUE_PER_FUNC) + return -EINVAL; + + for (i = 0; i < NBL_COMMON_TO_ETH_MODE(common); i++) { + pf_real_num = p->net_max_qp_num[i] ? + NBL_VSI_PF_LEGAL_QUEUE_NUM(p->net_max_qp_num[i]) : + NBL_VSI_PF_LEGAL_QUEUE_NUM(p->pf_def_max_net_qp_num); + + if (pf_real_num > NBL_MAX_TXRX_QUEUE_PER_FUNC) + return -EINVAL; + + pf_real_num = p->net_max_qp_num[i] ? + NBL_VSI_PF_MAX_QUEUE_NUM(p->net_max_qp_num[i]) : + NBL_VSI_PF_MAX_QUEUE_NUM(p->pf_def_max_net_qp_num); + if (pf_real_num > NBL_MAX_TXRX_QUEUE_PER_FUNC) + pf_real_num = NBL_MAX_TXRX_QUEUE_PER_FUNC; + + sum += pf_real_num; + } + + for (i = 0; i < res_info->max_vf_num; i++) { + tmp = i + NBL_MAX_PF; + vf_real_num = p->net_max_qp_num[tmp] ? 
+ NBL_VSI_VF_REAL_QUEUE_NUM(p->net_max_qp_num[tmp]) : + NBL_VSI_VF_REAL_QUEUE_NUM(p->vf_def_max_net_qp_num); + + if (vf_real_num > NBL_MAX_TXRX_QUEUE_PER_FUNC) + return -EINVAL; + + sum += vf_real_num; + } + + if (sum > NBL_MAX_TXRX_QUEUE) + return -EINVAL; + + return 0; +} + +static u32 nbl_res_aq_sum_vf_num(struct nbl_cmd_vf_num *param) +{ + u32 count = 0; + int i; + + for (i = 0; i < NBL_VF_NUM_CMD_LEN; i++) + count += param->vf_max_num[i]; + + return count; +} + +static int nbl_res_aq_check_vf_num_type(struct nbl_resource_mgt *res_mgt, + struct nbl_cmd_vf_num *param) +{ + u32 count; + + count = nbl_res_aq_sum_vf_num(param); + if (count > NBL_MAX_VF) + return -EINVAL; + + return 0; +} + +static int nbl_res_fw_cmd_filter_rw_in(struct nbl_resource_mgt *res_mgt, + void *data, u16 len) +{ + struct nbl_chan_resource_write_param *param = + (struct nbl_chan_resource_write_param *)data; + struct nbl_cmd_net_ring_num *net_ring_num_param; + struct nbl_cmd_vf_num *vf_num_param; + + switch (param->resid) { + case NBL_ADMINQ_PFA_TLV_NET_RING_NUM: + net_ring_num_param = (struct nbl_cmd_net_ring_num *)param->data; + return nbl_res_aq_chk_net_ring_num(res_mgt, net_ring_num_param); + case NBL_ADMINQ_PFA_TLV_VF_NUM: + vf_num_param = (struct nbl_cmd_vf_num *)param->data; + return nbl_res_aq_check_vf_num_type(res_mgt, vf_num_param); + default: + break; + } + + return 0; +} + +static int nbl_res_fw_cmd_filter_rw_out(struct nbl_resource_mgt *res_mgt, + void *in, u16 in_len, void *out, + u16 out_len) +{ + struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt); + struct nbl_net_ring_num_info *num_info = &res_info->net_ring_num_info; + struct nbl_chan_resource_write_param *param = + (struct nbl_chan_resource_write_param *)in; + struct nbl_cmd_net_ring_num *net_ring_num_param; + struct nbl_cmd_vf_num *vf_num_param; + size_t copy_len; + u32 count; + + switch (param->resid) { + case NBL_ADMINQ_PFA_TLV_NET_RING_NUM: + net_ring_num_param = (struct nbl_cmd_net_ring_num 
*)param->data; + copy_len = min_t(size_t, sizeof(*num_info), (size_t)in_len); + memcpy(num_info, net_ring_num_param, copy_len); + break; + case NBL_ADMINQ_PFA_TLV_VF_NUM: + vf_num_param = (struct nbl_cmd_vf_num *)param->data; + count = nbl_res_aq_sum_vf_num(vf_num_param); + res_info->max_vf_num = count; + break; + default: + break; + } + + return 0; +} + +static void +nbl_res_aq_add_cmd_filter_res_write(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt); + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_res_fw_cmd_filter filter = { + .in = nbl_res_fw_cmd_filter_rw_in, + .out = nbl_res_fw_cmd_filter_rw_out, + }; + u16 key = 0; + + key = NBL_CHAN_MSG_ADMINQ_RESOURCE_WRITE; + + if (nbl_common_alloc_hash_node(adminq_mgt->cmd_filter, &key, &filter, + NULL)) + nbl_warn(common, "Fail to register res_write in filter"); +} + +/* **** FW CMD FILTERS END **** */ + +static int nbl_res_aq_set_module_eeprom_info(struct nbl_resource_mgt *res_mgt, + u8 eth_id, u8 i2c_address, u8 page, + u8 bank, u32 offset, u32 length, + u8 *data) +{ + struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct nbl_chan_send_info chan_send; + struct nbl_chan_param_module_eeprom_info param = { 0 }; + u32 xfer_size = 0; + u32 byte_offset = 0; + int data_length = length; + int ret = 0; + + do { + xfer_size = + min_t(u32, data_length, NBL_MODULE_EEPRO_WRITE_MAX_LEN); + data_length -= xfer_size; + + param.eth_id = eth_id; + param.i2c_address = i2c_address; + param.page = page; + param.bank = bank; + param.write = 1; + param.version = 1; + param.offset = offset + byte_offset; + param.length = xfer_size; + memcpy(param.data, data + byte_offset, xfer_size); + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM, ¶m, + sizeof(param), 
NULL, 0, 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), + &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed: %d, msg: 0x%x, eth_id:%d, addr:%d,", + ret, NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM, + eth_info->logic_eth_id[eth_id], i2c_address); + dev_err(dev, "page:%d, bank:%d, offset:%d, length:%d\n", + page, bank, offset + byte_offset, xfer_size); + } + byte_offset += xfer_size; + } while (!ret && data_length > 0); + + return ret; +} + +static int nbl_res_aq_turn_module_eeprom_page(struct nbl_resource_mgt *res_mgt, + u8 eth_id, u8 page) +{ + int ret; + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + + ret = nbl_res_aq_set_module_eeprom_info(res_mgt, eth_id, + I2C_DEV_ADDR_A0, 0, 0, + SFF_8636_TURNPAGE_ADDR, 1, + &page); + if (ret) { + dev_err(dev, "eth %d set_module_eeprom_info failed %d\n", + eth_info->logic_eth_id[eth_id], ret); + return -EIO; + } + + return ret; +} + +static int nbl_res_aq_get_module_eeprom(struct nbl_resource_mgt *res_mgt, + u8 eth_id, u8 i2c_address, u8 page, + u8 bank, u32 offset, u32 length, + u8 *data) +{ + struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct nbl_chan_send_info chan_send; + struct nbl_chan_param_module_eeprom_info param = { 0 }; + u32 xfer_size = 0; + u32 byte_offset = 0; + int data_length = length; + int ret = 0; + + /* read a maximum of 128 bytes each time */ + do { + xfer_size = min_t(u32, data_length, NBL_MAX_HW_I2C_RESP_SIZE); + data_length -= xfer_size; + + param.eth_id = eth_id; + param.i2c_address = i2c_address; + param.page = page; + param.bank = bank; + param.write = 0; + param.version = 1; + param.offset = offset + byte_offset; + param.length = xfer_size; + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM, 
¶m, + sizeof(param), data + byte_offset, xfer_size, 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), + &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed: %d, msg: 0x%x, eth_id:%d, addr:%d,", + ret, NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM, + eth_info->logic_eth_id[eth_id], i2c_address); + dev_err(dev, "page:%d, bank:%d, offset:%d, length:%d\n", + page, bank, offset + byte_offset, xfer_size); + } + byte_offset += xfer_size; + } while (!ret && data_length > 0); + + return ret; +} + static int nbl_res_aq_set_sfp_state(void *priv, u8 eth_id, u8 state) { struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; @@ -56,6 +323,481 @@ int nbl_res_open_sfp(struct nbl_resource_mgt *res_mgt, u8 eth_id) return nbl_res_aq_set_sfp_state(res_mgt, eth_id, NBL_SFP_MODULE_ON); } +static bool nbl_res_aq_check_fw_heartbeat(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + unsigned long check_time; + u32 seq_acked; + + if (adminq_mgt->fw_resetting) { + adminq_mgt->fw_last_hb_seq++; + return false; + } + + check_time = jiffies; + if (time_before(check_time, adminq_mgt->fw_last_hb_time + 5 * HZ)) + return true; + + seq_acked = hw_ops->get_fw_pong(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + if (adminq_mgt->fw_last_hb_seq == seq_acked) { + adminq_mgt->fw_last_hb_seq++; + adminq_mgt->fw_last_hb_time = check_time; + hw_ops->set_fw_ping(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + adminq_mgt->fw_last_hb_seq); + return true; + } + + return false; +} + +static bool nbl_res_aq_check_fw_reset(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u32 seq_acked; + + seq_acked = hw_ops->get_fw_pong(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + if 
(adminq_mgt->fw_last_hb_seq != seq_acked) { + hw_ops->set_fw_ping(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + adminq_mgt->fw_last_hb_seq); + return false; + } + + adminq_mgt->fw_resetting = false; + wake_up(&adminq_mgt->wait_queue); + return true; +} + +static int nbl_res_aq_get_port_attributes(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_chan_send_info chan_send; + struct nbl_port_key *param; + int param_len = 0; + u64 port_caps = 0; + u64 port_advertising = 0; + u64 key = 0; + int eth_id = 0; + int ret; + + param_len = sizeof(struct nbl_port_key) + 1 * sizeof(u64); + param = kzalloc(param_len, GFP_KERNEL); + if (!param) + return -ENOMEM; + for_each_set_bit(eth_id, eth_info->eth_bitmap, NBL_MAX_ETHERNET) { + key = NBL_PORT_KEY_CAPABILITIES; + port_caps = 0; + + memset(param, 0, param_len); + param->id = eth_id; + param->subop = NBL_PORT_SUBOP_READ; + param->data[0] = key << NBL_PORT_KEY_KEY_SHIFT; + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, param, + param_len, (void *)&port_caps, sizeof(port_caps), + 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), + &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed with ret: %d, msg_type: 0x%x, eth_id:%d, get_port_caps\n", + ret, NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, + eth_info->logic_eth_id[eth_id]); + kfree(param); + return ret; + } + + eth_info->port_caps[eth_id] = port_caps & + NBL_PORT_KEY_DATA_MASK; + + dev_info(dev, "ctrl dev get eth %d port caps: %llx\n", + eth_info->logic_eth_id[eth_id], + eth_info->port_caps[eth_id]); + } + + for_each_set_bit(eth_id, eth_info->eth_bitmap, NBL_MAX_ETHERNET) { + key = NBL_PORT_KEY_ADVERT; + port_advertising = 0; + + memset(param, 0, param_len); + 
param->id = eth_id; + param->subop = NBL_PORT_SUBOP_READ; + param->data[0] = key << NBL_PORT_KEY_KEY_SHIFT; + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, param, + param_len, (void *)&port_advertising, + sizeof(port_advertising), 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), + &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed with ret: %d, msg_type: 0x%x, eth_id:%d, port_advertising\n", + ret, NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, + eth_info->logic_eth_id[eth_id]); + kfree(param); + return ret; + } + + port_advertising = port_advertising & NBL_PORT_KEY_DATA_MASK; + /* set default FEC mode: auto */ + port_advertising = port_advertising & ~NBL_PORT_CAP_FEC_MASK; + port_advertising += BIT(NBL_PORT_CAP_FEC_RS); + port_advertising += BIT(NBL_PORT_CAP_FEC_BASER); + /* set default pause: tx on, rx on */ + port_advertising = port_advertising & ~NBL_PORT_CAP_PAUSE_MASK; + port_advertising += BIT(NBL_PORT_CAP_TX_PAUSE); + port_advertising += BIT(NBL_PORT_CAP_RX_PAUSE); + eth_info->port_advertising[eth_id] = port_advertising; + + dev_info(dev, "ctrl dev get eth %d port advertising: %llx\n", + eth_info->logic_eth_id[eth_id], + eth_info->port_advertising[eth_id]); + } + + kfree(param); + return 0; +} + +static int nbl_res_aq_enable_port(void *priv, bool enable) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_chan_send_info chan_send; + struct nbl_port_key *param; + int param_len = 0; + u64 data = 0; + u64 key = 0; + int eth_id = 0; + int ret; + + param_len = sizeof(struct nbl_port_key) + 1 * sizeof(u64); + param = kzalloc(param_len, GFP_KERNEL); + if (!param) + return -ENOMEM; + if (enable) { + key = NBL_PORT_KEY_ENABLE; + data = 
NBL_PORT_FLAG_ENABLE_NOTIFY + + (key << NBL_PORT_KEY_KEY_SHIFT); + } else { + key = NBL_PORT_KEY_DISABLE; + data = key << NBL_PORT_KEY_KEY_SHIFT; + } + + for_each_set_bit(eth_id, eth_info->eth_bitmap, NBL_MAX_ETHERNET) { + nbl_res_aq_set_sfp_state(res_mgt, eth_id, NBL_SFP_MODULE_ON); + + memset(param, 0, param_len); + param->id = eth_id; + param->subop = NBL_PORT_SUBOP_WRITE; + param->data[0] = data; + + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, + NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, param, + param_len, NULL, 0, 1); + ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), + &chan_send); + if (ret) { + dev_err(dev, + "adminq send msg failed with ret: %d, msg_type: 0x%x, eth_id:%d, %s port\n", + ret, NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, + eth_info->logic_eth_id[eth_id], + enable ? "enable" : "disable"); + kfree(param); + return ret; + } + + dev_info(dev, "ctrl dev %s eth %d\n", + enable ? "enable" : "disable", + eth_info->logic_eth_id[eth_id]); + } + + kfree(param); + return 0; +} + +static int nbl_res_aq_get_special_port_type(struct nbl_resource_mgt *res_mgt, + u8 eth_id) +{ + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + u8 port_type = NBL_PORT_TYPE_UNKNOWN; + u8 cable_tech = 0; + int ret; + + ret = nbl_res_aq_turn_module_eeprom_page(res_mgt, eth_id, 0); + if (ret) { + dev_err(dev, "eth %d get_module_eeprom_info failed %d\n", + eth_info->logic_eth_id[eth_id], ret); + port_type = NBL_PORT_TYPE_UNKNOWN; + return port_type; + } + + ret = nbl_res_aq_get_module_eeprom(res_mgt, eth_id, I2C_DEV_ADDR_A0, 0, + 0, SFF8636_DEVICE_TECH_OFFSET, 1, + &cable_tech); + if (ret) { + dev_err(dev, "eth %d get_module_eeprom_info failed %d\n", + eth_info->logic_eth_id[eth_id], ret); + port_type = NBL_PORT_TYPE_UNKNOWN; + return port_type; + } + cable_tech = (cable_tech >> 4) & 0x0f; + switch (cable_tech) { + case SFF8636_TRANSMIT_FIBER_850nm_VCSEL: + case 
SFF8636_TRANSMIT_FIBER_1310nm_VCSEL: + case SFF8636_TRANSMIT_FIBER_1550nm_VCSEL: + case SFF8636_TRANSMIT_FIBER_1310nm_FP: + case SFF8636_TRANSMIT_FIBER_1310nm_DFB: + case SFF8636_TRANSMIT_FIBER_1550nm_DFB: + case SFF8636_TRANSMIT_FIBER_1310nm_EML: + case SFF8636_TRANSMIT_FIBER_1550nm_EML: + case SFF8636_TRANSMIT_FIBER_1490nm_DFB: + port_type = NBL_PORT_TYPE_FIBRE; + break; + case SFF8636_TRANSMIT_COPPER_UNEQUA: + case SFF8636_TRANSMIT_COPPER_PASSIVE_EQUALIZED: + case SFF8636_TRANSMIT_COPPER_NEAR_FAR_END: + case SFF8636_TRANSMIT_COPPER_FAR_END: + case SFF8636_TRANSMIT_COPPER_NEAR_END: + case SFF8636_TRANSMIT_COPPER_LINEAR_ACTIVE: + port_type = NBL_PORT_TYPE_COPPER; + break; + default: + dev_err(dev, "eth %d unknown port_type\n", + eth_info->logic_eth_id[eth_id]); + port_type = NBL_PORT_TYPE_UNKNOWN; + break; + } + return port_type; +} + +static int nbl_res_aq_get_common_port_type(struct nbl_resource_mgt *res_mgt, + u8 eth_id) +{ + struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + u8 data[SFF_8472_CABLE_SPEC_COMP + 1]; + u8 cable_tech = 0; + u8 cable_comp = 0; + u8 port_type = NBL_PORT_TYPE_UNKNOWN; + int ret; + + ret = nbl_res_aq_get_module_eeprom(res_mgt, eth_id, I2C_DEV_ADDR_A0, 0, + 0, 0, SFF_8472_CABLE_SPEC_COMP + 1, + data); + if (ret) { + dev_err(dev, "eth %d get_module_eeprom_info failed %d\n", + eth_info->logic_eth_id[eth_id], ret); + port_type = NBL_PORT_TYPE_UNKNOWN; + return port_type; + } + + cable_tech = data[SFF_8472_CABLE_TECHNOLOGY]; + + if (cable_tech & SFF_PASSIVE_CABLE) { + cable_comp = data[SFF_8472_CABLE_SPEC_COMP]; + + /* determine if the port is a copper cable */ + if (cable_comp == SFF_COPPER_UNSPECIFIED || + cable_comp == SFF_COPPER_8431_APPENDIX_E) + port_type = NBL_PORT_TYPE_COPPER; + else + port_type = NBL_PORT_TYPE_FIBRE; + } else if (cable_tech & SFF_ACTIVE_CABLE) { + cable_comp = data[SFF_8472_CABLE_SPEC_COMP]; + + /* determine if the port is a copper cable */ 
+		if (cable_comp == SFF_COPPER_UNSPECIFIED ||
+		    cable_comp == SFF_COPPER_8431_APPENDIX_E ||
+		    cable_comp == SFF_COPPER_8431_LIMITING)
+			port_type = NBL_PORT_TYPE_COPPER;
+		else
+			port_type = NBL_PORT_TYPE_FIBRE;
+	} else {
+		port_type = NBL_PORT_TYPE_FIBRE;
+	}
+
+	return port_type;
+}
+
+static int nbl_res_aq_get_port_type(struct nbl_resource_mgt *res_mgt, u8 eth_id)
+{
+	if (res_mgt->resource_info->board_info.eth_speed ==
+	    NBL_FW_PORT_SPEED_100G)
+		return nbl_res_aq_get_special_port_type(res_mgt, eth_id);
+
+	return nbl_res_aq_get_common_port_type(res_mgt, eth_id);
+}
+
+static s32 nbl_res_aq_get_module_bitrate(struct nbl_resource_mgt *res_mgt,
+					 u8 eth_id)
+{
+	struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common);
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+	u8 data[SFF_8472_SIGNALING_RATE_MAX + 1];
+	u32 result;
+	u8 br_nom;
+	u8 br_max;
+	u8 identifier;
+	u8 encoding = 0;
+	int port_max_rate;
+	int ret;
+
+	if (res_mgt->resource_info->board_info.eth_speed ==
+	    NBL_FW_PORT_SPEED_100G) {
+		ret = nbl_res_aq_turn_module_eeprom_page(res_mgt, eth_id, 0);
+		if (ret) {
+			dev_err(dev,
+				"eth %d get_module_eeprom_info failed %d\n",
+				eth_info->logic_eth_id[eth_id], ret);
+			return NBL_PORT_MAX_RATE_UNKNOWN;
+		}
+	}
+
+	ret = nbl_res_aq_get_module_eeprom(res_mgt, eth_id, I2C_DEV_ADDR_A0, 0,
+					   0, 0,
+					   SFF_8472_SIGNALING_RATE_MAX + 1,
+					   data);
+	if (ret) {
+		dev_err(dev, "eth %d get_module_eeprom_info failed %d\n",
+			eth_info->logic_eth_id[eth_id], ret);
+		return NBL_PORT_MAX_RATE_UNKNOWN;
+	}
+
+	if (res_mgt->resource_info->board_info.eth_speed ==
+	    NBL_FW_PORT_SPEED_100G) {
+		ret = nbl_res_aq_get_module_eeprom(res_mgt, eth_id,
+						   I2C_DEV_ADDR_A0, 0, 0,
+						   SFF_8636_VENDOR_ENCODING, 1,
+						   &encoding);
+		if (ret) {
+			dev_err(dev,
+				"eth %d get_module_eeprom_info failed %d\n",
+				eth_info->logic_eth_id[eth_id], ret);
+			return NBL_PORT_MAX_RATE_UNKNOWN;
+		}
+	}
+
+	br_nom = data[SFF_8472_SIGNALING_RATE];
+	br_max = data[SFF_8472_SIGNALING_RATE_MAX];
+	identifier = data[SFF_8472_IDENTIFIER];
+
+	/* sff-8472 section 5.6 */
+	if (br_nom == 255)
+		result = (u32)br_max * 250;
+	else if (br_nom == 0)
+		result = 0;
+	else
+		result = (u32)br_nom * 100;
+
+	switch (result / 1000) {
+	case 25:
+		port_max_rate = NBL_PORT_MAX_RATE_25G;
+		break;
+	case 10:
+		port_max_rate = NBL_PORT_MAX_RATE_10G;
+		break;
+	case 1:
+		port_max_rate = NBL_PORT_MAX_RATE_1G;
+		break;
+	default:
+		port_max_rate = NBL_PORT_MAX_RATE_UNKNOWN;
+		break;
+	}
+
+	if (identifier == SFF_IDENTIFIER_QSFP28)
+		port_max_rate = NBL_PORT_MAX_RATE_100G;
+
+	if (identifier == SFF_IDENTIFIER_PAM4 ||
+	    encoding == SFF_8636_ENCODING_PAM4)
+		port_max_rate = NBL_PORT_MAX_RATE_100G_PAM4;
+
+	return port_max_rate;
+}
+
+static void nbl_res_eth_task_schedule(struct nbl_adminq_mgt *adminq_mgt)
+{
+	nbl_common_queue_work(&adminq_mgt->eth_task, true);
+}
+
+static void nbl_res_aq_recv_port_notify(void *priv, void *data)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common);
+	struct nbl_port_notify *notify;
+	u8 last_module_inplace = 0;
+	u8 last_link_state = 0;
+	int eth_id = 0;
+
+	notify = (struct nbl_port_notify *)data;
+	eth_id = notify->id;
+
+	dev_info(dev,
+		 "eth_id:%d link_state:%d, module_inplace:%d, speed:%d, flow_ctrl:%d, fec:%d, advertising:%llx, lp_advertising:%llx\n",
+		 eth_info->logic_eth_id[eth_id], notify->link_state,
+		 notify->module_inplace, notify->speed * 10, notify->flow_ctrl,
+		 notify->fec, notify->advertising, notify->lp_advertising);
+
+	mutex_lock(&adminq_mgt->eth_lock);
+
+	last_module_inplace = eth_info->module_inplace[eth_id];
+	last_link_state = eth_info->link_state[eth_id];
+
+	if (!notify->link_state)
+		eth_info->link_down_count[eth_id]++;
+
+	eth_info->link_state[eth_id] = notify->link_state;
+	eth_info->module_inplace[eth_id] = notify->module_inplace;
+	/* When the eth link is down, do not update the speed.
+	 * When autoneg is configured off, ethtool reads the speed and sets it
+	 * with the disable-autoneg command; if the eth link is down, the
+	 * speed reported by the EMP is not credible, so preserve the last
+	 * link-up speed.
+	 */
+	if (notify->link_state || !eth_info->link_speed[eth_id])
+		eth_info->link_speed[eth_id] = notify->speed * 10;
+	eth_info->active_fc[eth_id] = notify->flow_ctrl;
+	eth_info->active_fec[eth_id] = notify->fec;
+	eth_info->port_lp_advertising[eth_id] = notify->lp_advertising;
+	eth_info->port_advertising[eth_id] = notify->advertising;
+
+	if (!last_module_inplace && notify->module_inplace) {
+		adminq_mgt->module_inplace_changed[eth_id] = 1;
+		nbl_res_eth_task_schedule(adminq_mgt);
+	}
+
+	if (last_link_state != notify->link_state) {
+		adminq_mgt->link_state_changed[eth_id] = 1;
+		nbl_res_eth_task_schedule(adminq_mgt);
+	}
+
+	mutex_unlock(&adminq_mgt->eth_lock);
+}
+
+static int
+nbl_res_aq_get_link_state(void *priv, u8 eth_id,
+			  struct nbl_eth_link_info *eth_link_info)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+
+	eth_link_info->link_status = eth_info->link_state[eth_id];
+	eth_link_info->link_speed = eth_info->link_speed[eth_id];
+
+	return 0;
+}
+
 static int nbl_res_aq_get_eth_mac_addr(void *priv, u8 *mac, u8 eth_id)
 {
 	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
@@ -108,3 +850,597 @@ int nbl_res_get_eth_mac(struct nbl_resource_mgt *res_mgt, u8 *mac, u8 eth_id)
 {
 	return nbl_res_aq_get_eth_mac_addr(res_mgt, mac, eth_id);
 }
+
+static int nbl_res_aq_set_eth_mac_addr(void *priv, u8 *mac, u8 eth_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common);
+	struct nbl_chan_send_info chan_send;
+	struct nbl_port_key *param;
+	int param_len = 0;
+	u64 data = 0;
+	u64 key = 0;
+	int ret;
+	int i;
+	u8 reverse_mac[ETH_ALEN];
+
+	param_len = sizeof(struct nbl_port_key) + 1 * sizeof(u64);
+	param = kzalloc(param_len, GFP_KERNEL);
+	if (!param)
+		return -ENOMEM;
+	key = NBL_PORT_KEY_MAC_ADDRESS;
+
+	/* convert mac address */
+	for (i = 0; i < ETH_ALEN; i++)
+		reverse_mac[i] = mac[ETH_ALEN - 1 - i];
+
+	memcpy(&data, reverse_mac, ETH_ALEN);
+
+	data += (key << NBL_PORT_KEY_KEY_SHIFT);
+
+	memset(param, 0, param_len);
+	param->id = eth_id;
+	param->subop = NBL_PORT_SUBOP_WRITE;
+	param->data[0] = data;
+
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES, param,
+		      param_len, NULL, 0, 1);
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret) {
+		dev_err(dev,
+			"adminq send msg failed with ret: %d, msg_type: 0x%x, eth_id:%d, reverse_mac=0x%x:%x:%x:%x:%x:%x\n",
+			ret, NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES,
+			eth_info->logic_eth_id[eth_id], reverse_mac[0],
+			reverse_mac[1], reverse_mac[2], reverse_mac[3],
+			reverse_mac[4], reverse_mac[5]);
+		kfree(param);
+		return ret;
+	}
+
+	kfree(param);
+	return 0;
+}
+
+static int
+nbl_res_aq_pt_filter_in(struct nbl_resource_mgt *res_mgt,
+			struct nbl_passthrough_fw_cmd *param)
+{
+	struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+	struct nbl_res_fw_cmd_filter *filter;
+
+	filter = nbl_common_get_hash_node(adminq_mgt->cmd_filter,
+					  &param->opcode);
+	if (filter && filter->in)
+		return filter->in(res_mgt, param->data, param->in_size);
+
+	return 0;
+}
+
+static int nbl_res_aq_pt_filter_out(struct nbl_resource_mgt *res_mgt,
+				    struct nbl_passthrough_fw_cmd *param,
+				    struct nbl_passthrough_fw_cmd *result)
+{
+	struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+	struct nbl_res_fw_cmd_filter *filter;
+	int ret = 0;
+
+	filter = nbl_common_get_hash_node(adminq_mgt->cmd_filter,
+					  &param->opcode);
+	if (filter && filter->out)
+		ret = filter->out(res_mgt, param->data, param->in_size,
+				  result->data, result->out_size);
+
+	return ret;
+}
+
+static int nbl_res_aq_passthrough(void *priv,
+				  struct nbl_passthrough_fw_cmd *param,
+				  struct nbl_passthrough_fw_cmd *result)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common);
+	struct nbl_chan_send_info chan_send;
+	u8 *in_data = NULL, *out_data = NULL;
+	int ret = 0;
+
+	ret = nbl_res_aq_pt_filter_in(res_mgt, param);
+	if (ret)
+		return ret;
+
+	if (param->in_size) {
+		in_data = kzalloc(param->in_size, GFP_KERNEL);
+		if (!in_data) {
+			ret = -ENOMEM;
+			goto in_data_fail;
+		}
+		memcpy(in_data, param->data, param->in_size);
+	}
+	if (param->out_size) {
+		out_data = kzalloc(param->out_size, GFP_KERNEL);
+		if (!out_data) {
+			ret = -ENOMEM;
+			goto out_data_fail;
+		}
+	}
+
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, param->opcode,
+		      in_data, param->in_size, out_data, param->out_size, 1);
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret) {
+		dev_dbg(dev,
+			"adminq send msg failed with ret: %d, msg_type: 0x%x\n",
+			ret, param->opcode);
+		goto send_fail;
+	}
+
+	result->opcode = param->opcode;
+	result->errcode = ret;
+	result->out_size = param->out_size;
+	if (result->out_size)
+		memcpy(result->data, out_data, param->out_size);
+
+	nbl_res_aq_pt_filter_out(res_mgt, param, result);
+
+send_fail:
+	kfree(out_data);
+out_data_fail:
+	kfree(in_data);
+in_data_fail:
+	return ret;
+}
+
+static int nbl_res_aq_update_ring_num(void *priv)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt);
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt));
+	struct nbl_chan_send_info chan_send;
+	struct nbl_chan_resource_read_param *param;
+	struct nbl_net_ring_num_info *info;
+	int ret = 0;
+
+	param = kzalloc(sizeof(*param), GFP_KERNEL);
+	if (!param) {
+		ret = -ENOMEM;
+		goto alloc_param_fail;
+	}
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info) {
+		ret = -ENOMEM;
+		goto alloc_info_fail;
+	}
+
+	param->resid = NBL_ADMINQ_PFA_TLV_NET_RING_NUM;
+	param->offset = 0;
+	param->len = sizeof(*info);
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_RESOURCE_READ, param, sizeof(*param),
+		      info, sizeof(*info), 1);
+
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret) {
+		dev_err(dev,
+			"adminq send msg failed with ret: %d, msg_type: 0x%x\n",
+			ret, NBL_CHAN_MSG_ADMINQ_RESOURCE_READ);
+		goto send_fail;
+	}
+
+	if (info->pf_def_max_net_qp_num && info->vf_def_max_net_qp_num &&
+	    !nbl_res_aq_chk_net_ring_num(res_mgt,
+					 (struct nbl_cmd_net_ring_num *)info))
+		memcpy(&res_info->net_ring_num_info, info,
+		       sizeof(res_info->net_ring_num_info));
+
+send_fail:
+	kfree(info);
+alloc_info_fail:
+	kfree(param);
+alloc_param_fail:
+	return ret;
+}
+
+static int nbl_res_aq_set_ring_num(void *priv,
+				   struct nbl_cmd_net_ring_num *param)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt));
+	struct nbl_chan_send_info chan_send;
+	struct nbl_chan_resource_write_param *data;
+	int data_len = sizeof(struct nbl_cmd_net_ring_num);
+	int ret = 0;
+
+	data = kzalloc(sizeof(*data) + data_len, GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->resid = NBL_ADMINQ_PFA_TLV_NET_RING_NUM;
+	data->offset = 0;
+	data->len = data_len;
+
+	memcpy(data + 1, param, data_len);
+
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_RESOURCE_WRITE, data,
+		      sizeof(*data) + data_len, NULL, 0, 1);
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret)
+		dev_err(dev, "adminq send msg failed with ret: %d\n", ret);
+
+	kfree(data);
+	return ret;
+}
+
+static int nbl_res_aq_set_wol(void *priv, u8 eth_id, bool enable)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt));
+	struct nbl_chan_send_info chan_send;
+	struct nbl_chan_adminq_reg_write_param reg_write = { 0 };
+	struct nbl_chan_adminq_reg_read_param reg_read = { 0 };
+	u32 value;
+	int ret = 0;
+
+	dev_info(dev, "set_wol ethid %d %sabled\n", eth_id,
+		 enable ? "en" : "dis");
+
+	reg_read.reg = NBL_ADMINQ_ETH_WOL_REG_OFFSET;
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_REGISTER_READ, &reg_read,
+		      sizeof(reg_read), &value, sizeof(value), 1);
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret) {
+		dev_err(dev, "adminq send msg failed with ret: %d\n", ret);
+		return ret;
+	}
+
+	reg_write.reg = NBL_ADMINQ_ETH_WOL_REG_OFFSET;
+	reg_write.value = (value & ~(1 << eth_id)) | (enable << eth_id);
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_REGISTER_WRITE, &reg_write,
+		      sizeof(reg_write), NULL, 0, 1);
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret)
+		dev_err(dev, "adminq send msg failed with ret: %d\n", ret);
+
+	return ret;
+}
+
+static int nbl_res_get_part_number(void *priv, char *part_number)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt));
+	struct nbl_chan_send_info chan_send;
+	struct nbl_chan_resource_read_param *param;
+	struct nbl_host_board_config *info;
+	int ret = 0;
+
+	param = kzalloc(sizeof(*param), GFP_KERNEL);
+	if (!param) {
+		ret = -ENOMEM;
+		goto alloc_param_fail;
+	}
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info) {
+		ret = -ENOMEM;
+		goto alloc_info_fail;
+	}
+
+	param->resid = NBL_ADMINQ_RESID_FSI_SECTION_HBC;
+	param->offset = 0;
+	param->len = sizeof(*info);
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_RESOURCE_READ, param, sizeof(*param),
+		      info, sizeof(*info), 1);
+
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret) {
+		dev_err(dev,
+			"adminq send msg failed with ret: %d, msg_type: 0x%x, resid: 0x%x\n",
+			ret, NBL_CHAN_MSG_ADMINQ_RESOURCE_READ,
+			NBL_ADMINQ_RESID_FSI_SECTION_HBC);
+		goto send_fail;
+	}
+
+	memcpy(part_number, info->product_name, sizeof(info->product_name));
+
+send_fail:
+	kfree(info);
+alloc_info_fail:
+	kfree(param);
+alloc_param_fail:
+	return ret;
+}
+
+static int nbl_res_get_serial_number(void *priv, char *serial_number)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(NBL_RES_MGT_TO_COMMON(res_mgt));
+	struct nbl_chan_send_info chan_send;
+	struct nbl_chan_resource_read_param *param;
+	struct nbl_serial_number_info *info;
+	int ret = 0;
+
+	param = kzalloc(sizeof(*param), GFP_KERNEL);
+	if (!param) {
+		ret = -ENOMEM;
+		goto alloc_param_fail;
+	}
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info) {
+		ret = -ENOMEM;
+		goto alloc_info_fail;
+	}
+
+	param->resid = NBL_ADMINQ_RESID_FSI_TLV_SERIAL_NUMBER;
+	param->offset = 0;
+	param->len = sizeof(*info);
+	NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID,
+		      NBL_CHAN_MSG_ADMINQ_RESOURCE_READ, param, sizeof(*param),
+		      info, sizeof(*info), 1);
+
+	ret = chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt), &chan_send);
+	if (ret) {
+		dev_err(dev,
+			"adminq send msg failed with ret: %d, msg_type: 0x%x, resid: 0x%x\n",
+			ret, NBL_CHAN_MSG_ADMINQ_RESOURCE_READ,
+			NBL_ADMINQ_RESID_FSI_TLV_SERIAL_NUMBER);
+		goto send_fail;
+	}
+	memcpy(serial_number, info->sn, info->len);
+
+send_fail:
+	kfree(info);
+alloc_info_fail:
+	kfree(param);
+alloc_param_fail:
+	return ret;
+}
+
+/* NBL_ADMINQ_SET_OPS(ops_name, func)
+ *
+ * Use X Macros to reduce setup and removal code.
+ */
+#define NBL_ADMINQ_OPS_TBL						\
+do {									\
+	NBL_ADMINQ_SET_OPS(set_sfp_state, nbl_res_aq_set_sfp_state);	\
+	NBL_ADMINQ_SET_OPS(check_fw_heartbeat,				\
+			   nbl_res_aq_check_fw_heartbeat);		\
+	NBL_ADMINQ_SET_OPS(check_fw_reset,				\
+			   nbl_res_aq_check_fw_reset);			\
+	NBL_ADMINQ_SET_OPS(get_port_attributes,				\
+			   nbl_res_aq_get_port_attributes);		\
+	NBL_ADMINQ_SET_OPS(update_ring_num,				\
+			   nbl_res_aq_update_ring_num);			\
+	NBL_ADMINQ_SET_OPS(set_ring_num, nbl_res_aq_set_ring_num);	\
+	NBL_ADMINQ_SET_OPS(enable_port, nbl_res_aq_enable_port);	\
+	NBL_ADMINQ_SET_OPS(recv_port_notify,				\
+			   nbl_res_aq_recv_port_notify);		\
+	NBL_ADMINQ_SET_OPS(get_link_state,				\
+			   nbl_res_aq_get_link_state);			\
+	NBL_ADMINQ_SET_OPS(set_eth_mac_addr,				\
+			   nbl_res_aq_set_eth_mac_addr);		\
+	NBL_ADMINQ_SET_OPS(set_wol, nbl_res_aq_set_wol);		\
+	NBL_ADMINQ_SET_OPS(passthrough_fw_cmd,				\
+			   nbl_res_aq_passthrough);			\
+	NBL_ADMINQ_SET_OPS(get_part_number, nbl_res_get_part_number);	\
+	NBL_ADMINQ_SET_OPS(get_serial_number, nbl_res_get_serial_number);\
+} while (0)
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_adminq_setup_mgt(struct device *dev,
+				struct nbl_adminq_mgt **adminq_mgt)
+{
+	*adminq_mgt =
+		devm_kzalloc(dev, sizeof(struct nbl_adminq_mgt), GFP_KERNEL);
+	if (!*adminq_mgt)
+		return -ENOMEM;
+
+	init_waitqueue_head(&(*adminq_mgt)->wait_queue);
+	return 0;
+}
+
+static void nbl_adminq_remove_mgt(struct device *dev,
+				  struct nbl_adminq_mgt **adminq_mgt)
+{
+	devm_kfree(dev, *adminq_mgt);
+	*adminq_mgt = NULL;
+}
+
+static int
+nbl_res_aq_chan_notify_link_state_req(struct nbl_resource_mgt *res_mgt, u16 fid,
+				      u8 link_state, u32 link_speed)
+{
+	struct nbl_channel_ops *chan_ops = NBL_RES_MGT_TO_CHAN_OPS(res_mgt);
+	struct nbl_chan_send_info chan_send;
+	struct nbl_chan_param_notify_link_state link_info = { 0 };
+
+	link_info.link_state = link_state;
+	link_info.link_speed = link_speed;
+	NBL_CHAN_SEND(chan_send, fid, NBL_CHAN_MSG_NOTIFY_LINK_STATE,
+		      &link_info, sizeof(link_info), NULL, 0, 0);
+	return chan_ops->send_msg(NBL_RES_MGT_TO_CHAN_PRIV(res_mgt),
+				  &chan_send);
+}
+
+static void nbl_res_aq_notify_link_state(struct nbl_resource_mgt *res_mgt,
+					 u8 eth_id, u8 state)
+{
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_sriov_info *sriov_info;
+	struct nbl_queue_info *queue_info;
+	u16 pf_fid = 0, vf_fid = 0, speed = 0;
+	int i = 0, j = 0;
+
+	for (i = 0; i < NBL_RES_MGT_TO_PF_NUM(res_mgt); i++) {
+		if (eth_info->pf_bitmap[eth_id] & BIT(i))
+			pf_fid = nbl_res_pfvfid_to_func_id(res_mgt, i, -1);
+		else
+			continue;
+
+		sriov_info = &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[pf_fid];
+		queue_info = &queue_mgt->queue_info[pf_fid];
+		speed = eth_info->link_speed[eth_id];
+		/* send eth's link state to pf */
+		if (queue_info->num_txrx_queues) {
+			nbl_res_aq_chan_notify_link_state_req(res_mgt, pf_fid,
+							      state, speed);
+		}
+
+		/* send eth's link state to pf's all vf */
+		for (j = 0; j < sriov_info->num_vfs; j++) {
+			vf_fid = sriov_info->start_vf_func_id + j;
+			queue_info = &queue_mgt->queue_info[vf_fid];
+			if (queue_info->num_txrx_queues) {
+				nbl_res_aq_chan_notify_link_state_req(res_mgt,
+								      vf_fid,
+								      state,
+								      speed);
+			}
+		}
+	}
+}
+
+static void nbl_res_aq_eth_task(struct work_struct *work)
+{
+	struct nbl_adminq_mgt *adminq_mgt =
+		container_of(work, struct nbl_adminq_mgt, eth_task);
+	struct nbl_resource_mgt *res_mgt = adminq_mgt->res_mgt;
+	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
+	u8 eth_id = 0;
+	u8 max_rate = 0;
+	u8 link_state;
+
+	for (eth_id = 0; eth_id < NBL_MAX_ETHERNET; eth_id++) {
+		if (adminq_mgt->module_inplace_changed[eth_id]) {
+			/* module not-inplace, transitions to inplace status */
+			/* read module register */
+			max_rate =
+				nbl_res_aq_get_module_bitrate(res_mgt, eth_id);
+
+			eth_info->port_max_rate[eth_id] = max_rate;
+			eth_info->port_type[eth_id] =
+				nbl_res_aq_get_port_type(res_mgt, eth_id);
+			eth_info->module_repluged[eth_id] = 1;
+			/* copper supports auto-negotiation */
+			if (eth_info->port_type[eth_id] == NBL_PORT_TYPE_COPPER)
+				eth_info->port_caps[eth_id] |=
+					BIT(NBL_PORT_CAP_AUTONEG);
+			else
+				eth_info->port_caps[eth_id] &=
+					~BIT_MASK(NBL_PORT_CAP_AUTONEG);
+
+			adminq_mgt->module_inplace_changed[eth_id] = 0;
+		}
+
+		mutex_lock(&adminq_mgt->eth_lock);
+		if (adminq_mgt->link_state_changed[eth_id]) {
+			link_state = eth_info->link_state[eth_id];
+			/* eth link state changed, notify pf and vf */
+			nbl_res_aq_notify_link_state(res_mgt, eth_id,
+						     link_state);
+			adminq_mgt->link_state_changed[eth_id] = 0;
+		}
+		mutex_unlock(&adminq_mgt->eth_lock);
+	}
+}
+
+static int nbl_res_aq_setup_cmd_filter(struct nbl_resource_mgt *res_mgt)
+{
+	struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+	struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+	struct nbl_hash_tbl_key tbl_key = { 0 };
+
+	NBL_HASH_TBL_KEY_INIT(&tbl_key, NBL_COMMON_TO_DEV(common), sizeof(u16),
+			      sizeof(struct nbl_res_fw_cmd_filter),
+			      NBL_RES_FW_CMD_FILTER_MAX, false);
+
+	adminq_mgt->cmd_filter = nbl_common_init_hash_table(&tbl_key);
+	if (!adminq_mgt->cmd_filter)
+		return -EFAULT;
+
+	return 0;
+}
+
+static void nbl_res_aq_remove_cmd_filter(struct nbl_resource_mgt *res_mgt)
+{
+	struct nbl_adminq_mgt *adminq_mgt = NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+
+	if (adminq_mgt->cmd_filter)
+		nbl_common_remove_hash_table(adminq_mgt->cmd_filter, NULL);
+
+	adminq_mgt->cmd_filter = NULL;
+}
+
+int nbl_adminq_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	struct nbl_adminq_mgt **adminq_mgt =
+		&NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	int ret;
+
+	ret = nbl_adminq_setup_mgt(dev, adminq_mgt);
+	if (ret)
+		goto setup_mgt_fail;
+
+	(*adminq_mgt)->res_mgt = res_mgt;
+
+	(*adminq_mgt)->fw_last_hb_seq =
+		(u32)hw_ops->get_fw_pong(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+
+	INIT_WORK(&(*adminq_mgt)->eth_task, nbl_res_aq_eth_task);
+	mutex_init(&(*adminq_mgt)->eth_lock);
+
+	ret = nbl_res_aq_setup_cmd_filter(res_mgt);
+	if (ret)
+		goto set_filter_fail;
+
+	nbl_res_aq_add_cmd_filter_res_write(res_mgt);
+
+	return 0;
+
+set_filter_fail:
+	cancel_work_sync(&(*adminq_mgt)->eth_task);
+	nbl_adminq_remove_mgt(dev, adminq_mgt);
+setup_mgt_fail:
+	return ret;
+}
+
+void nbl_adminq_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	struct nbl_adminq_mgt **adminq_mgt =
+		&NBL_RES_MGT_TO_ADMINQ_MGT(res_mgt);
+
+	if (!(*adminq_mgt))
+		return;
+
+	nbl_res_aq_remove_cmd_filter(res_mgt);
+
+	cancel_work_sync(&(*adminq_mgt)->eth_task);
+	nbl_adminq_remove_mgt(dev, adminq_mgt);
+}
+
+int nbl_adminq_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_ADMINQ_SET_OPS(name, func)					\
+	do {								\
+		res_ops->NBL_NAME(name) = func;				\
+	} while (0)
+	NBL_ADMINQ_OPS_TBL;
+#undef NBL_ADMINQ_SET_OPS
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
index cc792497d01f..4ee35f46c785 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c
@@ -4,19 +4,1632 @@
  * Author:
  */
 
+#include <linux/if_bridge.h>
+
 #include "nbl_hw_leonis.h"
+#include "nbl_hw/nbl_hw_leonis/base/nbl_datapath.h"
+#include "nbl_hw/nbl_hw_leonis/base/nbl_ppe.h"
+#include "nbl_hw_leonis_regs.h"
+
 static u32 nbl_hw_get_quirks(void *priv)
 {
-	struct nbl_hw_mgt *hw_mgt = priv;
-	u32 quirks;
+	struct nbl_hw_mgt *hw_mgt = priv;
+	u32 quirks;
+
+	nbl_hw_read_mbx_regs(hw_mgt, NBL_LEONIS_QUIRKS_OFFSET, (u8 *)&quirks,
+			     sizeof(u32));
+
+	if (quirks == NBL_LEONIS_ILLEGAL_REG_VALUE)
+		return 0;
+
+	return quirks;
+}
+
+static void nbl_configure_dped_checksum(struct nbl_hw_mgt *hw_mgt)
+{
+	union dped_l4_ck_cmd_40_u l4_ck_cmd_40;
+
+	/* DPED dped_l4_ck_cmd_40 for sctp */
+	nbl_hw_rd_regs(hw_mgt, NBL_DPED_L4_CK_CMD_40_ADDR, (u8 *)&l4_ck_cmd_40,
+		       sizeof(l4_ck_cmd_40));
+	l4_ck_cmd_40.info.en = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_DPED_L4_CK_CMD_40_ADDR, (u8 *)&l4_ck_cmd_40,
+		       sizeof(l4_ck_cmd_40));
+}
+
+static int nbl_dped_init(struct nbl_hw_mgt *hw_mgt)
+{
+	nbl_hw_wr32(hw_mgt, NBL_DPED_VLAN_OFFSET, 0xC);
+	nbl_hw_wr32(hw_mgt, NBL_DPED_DSCP_OFFSET_0, 0x8);
+	nbl_hw_wr32(hw_mgt, NBL_DPED_DSCP_OFFSET_1, 0x4);
+
+	/* dped checksum offload */
+	nbl_configure_dped_checksum(hw_mgt);
+
+	return 0;
+}
+
+static int nbl_uped_init(struct nbl_hw_mgt *hw_mgt)
+{
+	struct ped_hw_edit_profile hw_edit;
+
+	nbl_hw_rd_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(5), (u8 *)&hw_edit,
+		       sizeof(hw_edit));
+	hw_edit.l3_len = 0;
+	nbl_hw_wr_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(5), (u8 *)&hw_edit,
+		       sizeof(hw_edit));
+
+	nbl_hw_rd_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(6), (u8 *)&hw_edit,
+		       sizeof(hw_edit));
+	hw_edit.l3_len = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_UPED_HW_EDT_PROF_TABLE(6), (u8 *)&hw_edit,
+		       sizeof(hw_edit));
+
+	return 0;
+}
+
+static void nbl_shaping_eth_init(struct nbl_hw_mgt *hw_mgt, u8 eth_id, u8 speed)
+{
+	struct nbl_shaping_dport dport = { 0 };
+	struct nbl_shaping_dvn_dport dvn_dport = { 0 };
+	u32 rate, half_rate;
+
+	if (speed == NBL_FW_PORT_SPEED_100G) {
+		rate = NBL_SHAPING_DPORT_100G_RATE;
+		half_rate = NBL_SHAPING_DPORT_HALF_100G_RATE;
+	} else {
+		rate = NBL_SHAPING_DPORT_25G_RATE;
+		half_rate = NBL_SHAPING_DPORT_HALF_25G_RATE;
+	}
+
+	dport.cir = rate;
+	dport.pir = rate;
+	dport.depth = max(dport.cir * 2, NBL_LR_LEONIS_NET_BUCKET_DEPTH);
+	dport.cbs = dport.depth;
+	dport.pbs = dport.depth;
+	dport.valid = 1;
+
+	dvn_dport.cir = half_rate;
+	dvn_dport.pir = rate;
+	dvn_dport.depth = dport.depth;
+	dvn_dport.cbs = dvn_dport.depth;
+	dvn_dport.pbs = dvn_dport.depth;
+	dvn_dport.valid = 1;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_DPORT_REG(eth_id), (u8 *)&dport,
+		       sizeof(dport));
+	nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_DVN_DPORT_REG(eth_id),
+		       (u8 *)&dvn_dport, sizeof(dvn_dport));
+}
+
+static int nbl_shaping_init(struct nbl_hw_mgt *hw_mgt, u8 speed)
+{
+	struct dsch_psha_en psha_en = { 0 };
+	struct nbl_shaping_net net_shaping = { 0 };
+
+	int i;
+
+	for (i = 0; i < NBL_MAX_ETHERNET; i++)
+		nbl_shaping_eth_init(hw_mgt, i, speed);
+
+	psha_en.en = 0xF;
+	nbl_hw_wr_regs(hw_mgt, NBL_DSCH_PSHA_EN_ADDR, (u8 *)&psha_en,
+		       sizeof(psha_en));
+
+	for (i = 0; i < NBL_MAX_FUNC; i++)
+		nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET_REG(i),
+			       (u8 *)&net_shaping, sizeof(net_shaping));
+	return 0;
+}
+
+static int nbl_dsch_qid_max_init(struct nbl_hw_mgt *hw_mgt)
+{
+	struct dsch_vn_quanta quanta = { 0 };
+
+	quanta.h_qua = NBL_HOST_QUANTA;
+	quanta.e_qua = NBL_ECPU_QUANTA;
+	nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_QUANTA_ADDR, (u8 *)&quanta,
+		       sizeof(quanta));
+	nbl_hw_wr32(hw_mgt, NBL_DSCH_HOST_QID_MAX, NBL_MAX_QUEUE_ID);
+
+	nbl_hw_wr32(hw_mgt, NBL_DVN_ECPU_QUEUE_NUM, 0);
+	nbl_hw_wr32(hw_mgt, NBL_UVN_ECPU_QUEUE_NUM, 0);
+
+	return 0;
+}
+
+static int nbl_ustore_init(struct nbl_hw_mgt *hw_mgt, u8 eth_num)
+{
+	struct ustore_pkt_len pkt_len;
+	struct nbl_ustore_port_drop_th drop_th;
+	int i;
+
+	nbl_hw_rd_regs(hw_mgt, NBL_USTORE_PKT_LEN_ADDR, (u8 *)&pkt_len,
+		       sizeof(pkt_len));
+	/* min arp packet length 42 (14 + 28) */
+	pkt_len.min = 42;
+	nbl_hw_wr_regs(hw_mgt, NBL_USTORE_PKT_LEN_ADDR, (u8 *)&pkt_len,
+		       sizeof(pkt_len));
+
+	drop_th.en = 1;
+	if (eth_num == 1)
+		drop_th.disc_th = NBL_USTORE_SIGNLE_ETH_DROP_TH;
+	else if (eth_num == 2)
+		drop_th.disc_th = NBL_USTORE_DUAL_ETH_DROP_TH;
+	else
+		drop_th.disc_th = NBL_USTORE_QUAD_ETH_DROP_TH;
+
+	for (i = 0; i < 4; i++)
+		nbl_hw_wr_regs(hw_mgt, NBL_USTORE_PORT_DROP_TH_REG_ARR(i),
+			       (u8 *)&drop_th, sizeof(drop_th));
+
+	for (i = 0; i < NBL_MAX_ETHERNET; i++) {
+		nbl_hw_rd32(hw_mgt, NBL_USTORE_BUF_PORT_DROP_PKT(i));
+		nbl_hw_rd32(hw_mgt, NBL_USTORE_BUF_PORT_TRUN_PKT(i));
+	}
+
+	return 0;
+}
+
+static int nbl_dstore_init(struct nbl_hw_mgt *hw_mgt, u8 speed)
+{
+	struct dstore_d_dport_fc_th fc_th;
+	struct dstore_port_drop_th drop_th;
+	struct dstore_disc_bp_th bp_th;
+	int i;
+
+	for (i = 0; i < 6; i++) {
+		nbl_hw_rd_regs(hw_mgt, NBL_DSTORE_PORT_DROP_TH_REG(i),
+			       (u8 *)&drop_th, sizeof(drop_th));
+		drop_th.en = 0;
+		nbl_hw_wr_regs(hw_mgt, NBL_DSTORE_PORT_DROP_TH_REG(i),
+			       (u8 *)&drop_th, sizeof(drop_th));
+	}
+
+	nbl_hw_rd_regs(hw_mgt, NBL_DSTORE_DISC_BP_TH, (u8 *)&bp_th,
+		       sizeof(bp_th));
+	bp_th.en = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_DSTORE_DISC_BP_TH, (u8 *)&bp_th,
+		       sizeof(bp_th));
+
+	for (i = 0; i < 4; i++) {
+		nbl_hw_rd_regs(hw_mgt, NBL_DSTORE_D_DPORT_FC_TH_REG(i),
+			       (u8 *)&fc_th, sizeof(fc_th));
+		if (speed == NBL_FW_PORT_SPEED_100G) {
+			fc_th.xoff_th = NBL_DSTORE_DROP_XOFF_TH_100G;
+			fc_th.xon_th = NBL_DSTORE_DROP_XON_TH_100G;
+		} else {
+			fc_th.xoff_th = NBL_DSTORE_DROP_XOFF_TH;
+			fc_th.xon_th = NBL_DSTORE_DROP_XON_TH;
+		}
+
+		fc_th.fc_en = 1;
+		nbl_hw_wr_regs(hw_mgt, NBL_DSTORE_D_DPORT_FC_TH_REG(i),
+			       (u8 *)&fc_th, sizeof(fc_th));
+	}
+
+	return 0;
+}
+
+static int nbl_ul4s_init(struct nbl_hw_mgt *hw_mgt)
+{
+	struct ul4s_sch_pad sch_pad;
+
+	nbl_hw_rd_regs(hw_mgt, NBL_UL4S_SCH_PAD_ADDR, (u8 *)&sch_pad,
+		       sizeof(sch_pad));
+	sch_pad.en = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_UL4S_SCH_PAD_ADDR, (u8 *)&sch_pad,
+		       sizeof(sch_pad));
+
+	return 0;
+}
+
+static void nbl_dvn_descreq_num_cfg(void *priv, u32 descreq_num)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_dvn_descreq_num_cfg descreq_num_cfg = { 0 };
+	u32 packet_ring_prefect_num = descreq_num & 0xffff;
+	u32 split_ring_prefect_num = (descreq_num >> 16) & 0xffff;
+
+	packet_ring_prefect_num =
+		packet_ring_prefect_num > 32 ? 32 : packet_ring_prefect_num;
+	packet_ring_prefect_num =
+		packet_ring_prefect_num < 8 ? 8 : packet_ring_prefect_num;
+	descreq_num_cfg.packed_l1_num = (packet_ring_prefect_num - 8) / 4;
+
+	split_ring_prefect_num =
+		split_ring_prefect_num > 16 ? 16 : split_ring_prefect_num;
+	split_ring_prefect_num =
+		split_ring_prefect_num < 8 ? 8 : split_ring_prefect_num;
+	descreq_num_cfg.avring_cfg_num = split_ring_prefect_num > 8 ? 1 : 0;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_DVN_DESCREQ_NUM_CFG, (u8 *)&descreq_num_cfg,
+		       sizeof(descreq_num_cfg));
+}
+
+static int nbl_dvn_init(struct nbl_hw_mgt *hw_mgt, u8 speed)
+{
+	struct nbl_dvn_desc_wr_merge_timeout timeout = { 0 };
+	struct nbl_dvn_dif_req_rd_ro_flag ro_flag = { 0 };
+
+	timeout.cfg_cycle = DEFAULT_DVN_DESC_WR_MERGE_TIMEOUT_MAX;
+	nbl_hw_wr_regs(hw_mgt, NBL_DVN_DESC_WR_MERGE_TIMEOUT, (u8 *)&timeout,
+		       sizeof(timeout));
+
+	ro_flag.rd_desc_ro_en = 1;
+	ro_flag.rd_data_ro_en = 1;
+	ro_flag.rd_avring_ro_en = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_DVN_DIF_REQ_RD_RO_FLAG, (u8 *)&ro_flag,
+		       sizeof(ro_flag));
+
+	if (speed == NBL_FW_PORT_SPEED_100G)
+		nbl_dvn_descreq_num_cfg(hw_mgt,
+					DEFAULT_DVN_100G_DESCREQ_NUMCFG);
+	else
+		nbl_dvn_descreq_num_cfg(hw_mgt, DEFAULT_DVN_DESCREQ_NUMCFG);
+
+	return 0;
+}
+
+static int nbl_uvn_init(struct nbl_hw_mgt *hw_mgt)
+{
+	struct uvn_desc_prefetch_init prefetch_init = { 0 };
+	struct uvn_desc_wr_timeout desc_wr_timeout = { 0 };
+	struct uvn_queue_err_mask mask = { 0 };
+	struct uvn_dif_req_ro_flag flag = { 0 };
+	u32 timeout = 119760; /* 200us 200000/1.67 */
+	u16 wr_timeout = 0x12c;
+	u32 quirks;
+
+	nbl_hw_wr32(hw_mgt, NBL_UVN_DESC_RD_WAIT, timeout);
+
+	desc_wr_timeout.num = wr_timeout;
+	nbl_hw_wr_regs(hw_mgt, NBL_UVN_DESC_WR_TIMEOUT, (u8 *)&desc_wr_timeout,
+		       sizeof(desc_wr_timeout));
+
+	flag.avail_rd = 1;
+	flag.desc_rd = 1;
+	flag.pkt_wr = 1;
+	flag.desc_wr = 0;
+	nbl_hw_wr_regs(hw_mgt, NBL_UVN_DIF_REQ_RO_FLAG, (u8 *)&flag,
+		       sizeof(flag));
+
+	nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_ERR_MASK, (u8 *)&mask,
+		       sizeof(mask));
+	mask.dif_err = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_ERR_MASK, (u8 *)&mask,
+		       sizeof(mask));
+
+	prefetch_init.num = NBL_UVN_DESC_PREFETCH_NUM;
+	prefetch_init.sel = 0;
+
+	quirks = nbl_hw_get_quirks(hw_mgt);
+
+	if (!(quirks & BIT(NBL_QUIRKS_UVN_PREFETCH_ALIGN)))
+		prefetch_init.sel = 1;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_UVN_DESC_PREFETCH_INIT, (u8 *)&prefetch_init,
+		       sizeof(prefetch_init));
+
+	return 0;
+}
+
+static int nbl_uqm_init(struct nbl_hw_mgt *hw_mgt)
+{
+	struct nbl_uqm_que_type que_type = { 0 };
+	u32 cnt = 0;
+	int i;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_FWD_DROP_CNT, (u8 *)&cnt, sizeof(cnt));
+
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_PKT_CNT, (u8 *)&cnt, sizeof(cnt));
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_PKT_SLICE_CNT, (u8 *)&cnt,
+		       sizeof(cnt));
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_PKT_LEN_ADD_CNT, (u8 *)&cnt,
+		       sizeof(cnt));
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_HEAD_PNTR_ADD_CNT, (u8 *)&cnt,
+		       sizeof(cnt));
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_DROP_WEIGHT_ADD_CNT, (u8 *)&cnt,
+		       sizeof(cnt));
+
+	for (i = 0; i < NBL_UQM_PORT_DROP_DEPTH; i++) {
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_UQM_PORT_DROP_PKT_CNT + (sizeof(cnt) * i),
+			       (u8 *)&cnt, sizeof(cnt));
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_UQM_PORT_DROP_PKT_SLICE_CNT +
+			       (sizeof(cnt) * i),
+			       (u8 *)&cnt, sizeof(cnt));
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_UQM_PORT_DROP_PKT_LEN_ADD_CNT +
+			       (sizeof(cnt) * i),
+			       (u8 *)&cnt, sizeof(cnt));
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_UQM_PORT_DROP_HEAD_PNTR_ADD_CNT +
+			       (sizeof(cnt) * i),
+			       (u8 *)&cnt, sizeof(cnt));
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_UQM_PORT_DROP_WEIGHT_ADD_CNT +
+			       (sizeof(cnt) * i),
+			       (u8 *)&cnt, sizeof(cnt));
+	}
+
+	for (i = 0; i < NBL_UQM_DPORT_DROP_DEPTH; i++)
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_UQM_DPORT_DROP_CNT + (sizeof(cnt) * i),
+			       (u8 *)&cnt, sizeof(cnt));
+
+	que_type.bp_drop = 0;
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_QUE_TYPE, (u8 *)&que_type,
+		       sizeof(que_type));
+
+	return 0;
+}
+
+static int nbl_dp_init(struct nbl_hw_mgt *hw_mgt, u8 speed, u8 eth_num)
+{
+	nbl_dped_init(hw_mgt);
+	nbl_uped_init(hw_mgt);
+	nbl_shaping_init(hw_mgt, speed);
+	nbl_dsch_qid_max_init(hw_mgt);
+	nbl_ustore_init(hw_mgt, eth_num);
+	nbl_dstore_init(hw_mgt, speed);
+	nbl_ul4s_init(hw_mgt);
+	nbl_dvn_init(hw_mgt, speed);
+	nbl_uvn_init(hw_mgt);
+	nbl_uqm_init(hw_mgt);
+
+	return 0;
+}
+
+static struct nbl_epro_action_filter_tbl
+	epro_action_filter_tbl[NBL_FWD_TYPE_MAX] = {
+	[NBL_FWD_TYPE_NORMAL] = { BIT(NBL_MD_ACTION_MCIDX) |
+				  BIT(NBL_MD_ACTION_TABLE_INDEX) |
+				  BIT(NBL_MD_ACTION_MIRRIDX) },
+	[NBL_FWD_TYPE_CPU_ASSIGNED] = { BIT(NBL_MD_ACTION_MCIDX) |
+					BIT(NBL_MD_ACTION_TABLE_INDEX) |
+					BIT(NBL_MD_ACTION_MIRRIDX) },
+	[NBL_FWD_TYPE_UPCALL] = { 0 },
+	[NBL_FWD_TYPE_SRC_MIRROR] = { BIT(NBL_MD_ACTION_FLOWID0) |
+				      BIT(NBL_MD_ACTION_FLOWID1) |
+				      BIT(NBL_MD_ACTION_RSSIDX) |
+				      BIT(NBL_MD_ACTION_TABLE_INDEX) |
+				      BIT(NBL_MD_ACTION_MCIDX) |
+				      BIT(NBL_MD_ACTION_VNI0) |
+				      BIT(NBL_MD_ACTION_VNI1) |
+				      BIT(NBL_MD_ACTION_PRBAC_IDX) |
+				      BIT(NBL_MD_ACTION_L4S_IDX) |
+				      BIT(NBL_MD_ACTION_DP_HASH0) |
+				      BIT(NBL_MD_ACTION_DP_HASH1) |
+				      BIT(NBL_MD_ACTION_MDF_PRI) |
+				      BIT(NBL_MD_ACTION_FLOW_CARIDX) |
+				      ((u64)0xffffffff << 32) },
+	[NBL_FWD_TYPE_OTHER_MIRROR] = { BIT(NBL_MD_ACTION_FLOWID0) |
+					BIT(NBL_MD_ACTION_FLOWID1) |
+					BIT(NBL_MD_ACTION_RSSIDX) |
+					BIT(NBL_MD_ACTION_TABLE_INDEX) |
+					BIT(NBL_MD_ACTION_MCIDX) |
+					BIT(NBL_MD_ACTION_VNI0) |
+					BIT(NBL_MD_ACTION_VNI1) |
+					BIT(NBL_MD_ACTION_PRBAC_IDX) |
+					BIT(NBL_MD_ACTION_L4S_IDX) |
+					BIT(NBL_MD_ACTION_DP_HASH0) |
+					BIT(NBL_MD_ACTION_DP_HASH1) |
+					BIT(NBL_MD_ACTION_MDF_PRI) },
+	[NBL_FWD_TYPE_MNG] = { 0 },
+	[NBL_FWD_TYPE_GLB_LB] = { 0 },
+	[NBL_FWD_TYPE_DROP] = { 0 },
+};
+
+static void nbl_epro_action_filter_cfg(struct nbl_hw_mgt *hw_mgt, u32 fwd_type,
+				       struct nbl_epro_action_filter_tbl *cfg)
+{
+	if (fwd_type >= NBL_FWD_TYPE_MAX) {
+		pr_err("fwd_type %u exceeds the max num %u.\n", fwd_type,
+		       NBL_FWD_TYPE_MAX);
+		return;
+	}
+
+	nbl_hw_wr_regs(hw_mgt, NBL_EPRO_ACTION_FILTER_TABLE(fwd_type),
+		       (u8 *)cfg, sizeof(*cfg));
+}
+
+static int nbl_epro_init(struct nbl_hw_mgt *hw_mgt)
+{
+	u32 fwd_type = 0;
+
+	for (fwd_type = 0; fwd_type < NBL_FWD_TYPE_MAX; fwd_type++)
+		nbl_epro_action_filter_cfg(hw_mgt, fwd_type,
+					   &epro_action_filter_tbl[fwd_type]);
+
+	return 0;
+}
+
+static int nbl_ppe_init(struct nbl_hw_mgt *hw_mgt)
+{
+	nbl_epro_init(hw_mgt);
+
+	return 0;
+}
+
+static int nbl_host_padpt_init(struct nbl_hw_mgt *hw_mgt)
+{
+	/* padpt flow control register */
+	nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_CPLH_UP, 0x10400);
+	nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_PD_DN, 0x10080);
+	nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_PH_DN, 0x10010);
+	nbl_hw_wr32(hw_mgt, NBL_HOST_PADPT_HOST_CFG_FC_NPH_DN, 0x10010);
+
+	return 0;
+}
+
+/* set padpt debug reg to cap for aged stop */
+static void nbl_host_pcap_init(struct nbl_hw_mgt *hw_mgt)
+{
+	int addr;
+
+	/* tx */
+	nbl_hw_wr32(hw_mgt, 0x15a4204, 0x4);
+	nbl_hw_wr32(hw_mgt, 0x15a4208, 0x10);
+
+	for (addr = 0x15a4300; addr <= 0x15a4338; addr += 4)
+		nbl_hw_wr32(hw_mgt, addr, 0x0);
+	nbl_hw_wr32(hw_mgt, 0x15a433c, 0xdf000000);
+
+	for (addr = 0x15a4340; addr <= 0x15a437c; addr += 4)
+		nbl_hw_wr32(hw_mgt, addr, 0x0);
+
+	/* rx */
+	nbl_hw_wr32(hw_mgt, 0x15a4804, 0x4);
+	nbl_hw_wr32(hw_mgt, 0x15a4808, 0x20);
+
+	for (addr = 0x15a4940; addr <= 0x15a4978; addr += 4)
+		nbl_hw_wr32(hw_mgt, addr, 0x0);
+	nbl_hw_wr32(hw_mgt, 0x15a497c, 0x0a000000);
+
+	for (addr = 0x15a4900; addr <= 0x15a4938; addr += 4)
+		nbl_hw_wr32(hw_mgt, addr, 0x0);
+	nbl_hw_wr32(hw_mgt, 0x15a493c, 0xbe000000);
+
+	nbl_hw_wr32(hw_mgt, 0x15a420c, 0x1);
+	nbl_hw_wr32(hw_mgt, 0x15a480c, 0x1);
+	nbl_hw_wr32(hw_mgt, 0x15a420c, 0x0);
+	nbl_hw_wr32(hw_mgt, 0x15a480c, 0x0);
+	nbl_hw_wr32(hw_mgt, 0x15a4200, 0x1);
+	nbl_hw_wr32(hw_mgt, 0x15a4800, 0x1);
+}
+
+static int nbl_intf_init(struct nbl_hw_mgt *hw_mgt)
+{
+
nbl_host_padpt_init(hw_mgt); + nbl_host_pcap_init(hw_mgt); + + return 0; +} + +static void nbl_hw_set_driver_status(struct nbl_hw_mgt *hw_mgt, bool active) +{ + u32 status = 0; + + status = nbl_hw_rd32(hw_mgt, NBL_DRIVER_STATUS_REG); + + status = (status & ~(1 << NBL_DRIVER_STATUS_BIT)) | + (active << NBL_DRIVER_STATUS_BIT); + + nbl_hw_wr32(hw_mgt, NBL_DRIVER_STATUS_REG, status); +} + +static void nbl_hw_deinit_chip_module(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + nbl_hw_set_driver_status(hw_mgt, false); +} + +static int nbl_hw_init_chip_module(void *priv, u8 eth_speed, u8 eth_num) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + nbl_debug(NBL_HW_MGT_TO_COMMON(hw_mgt), "hw_chip_init"); + + nbl_dp_init(hw_mgt, eth_speed, eth_num); + nbl_ppe_init(hw_mgt); + nbl_intf_init(hw_mgt); + + nbl_write_all_regs(hw_mgt); + nbl_hw_set_driver_status(hw_mgt, true); + hw_mgt->version = nbl_hw_rd32(hw_mgt, NBL_HW_DUMMY_REG); + + return 0; +} + +static int nbl_hw_init_qid_map_table(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_virtio_qid_map_table info = { 0 }, info2 = { 0 }; + struct device *dev = NBL_HW_MGT_TO_DEV(hw_mgt); + u64 reg; + u16 i, j, k; + + memset(&info, 0, sizeof(info)); + info.local_qid = 0x1FF; + info.notify_addr_l = 0x7FFFFF; + info.notify_addr_h = 0xFFFFFFFF; + info.global_qid = 0xFFF; + info.ctrlq_flag = 0X1; + info.rsv1 = 0; + info.rsv2 = 0; + + for (k = 0; k < 2; k++) { /* 0 is primary table , 1 is standby table */ + for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) { + j = 0; + do { + reg = NBL_PCOMPLETER_QID_MAP_REG_ARR(k, i); + nbl_hw_wr_regs(hw_mgt, reg, (u8 *)&info, + sizeof(info)); + nbl_hw_rd_regs(hw_mgt, reg, (u8 *)&info2, + sizeof(info2)); + if (likely(!memcmp(&info, &info2, + sizeof(info)))) + break; + j++; + } while (j < NBL_REG_WRITE_MAX_TRY_TIMES); + + if (j == NBL_REG_WRITE_MAX_TRY_TIMES) + dev_err(dev, + "Write to qid map table entry %hu failed\n", + i); + } + } 
+ + return 0; +} + +static int nbl_hw_set_qid_map_table(void *priv, void *data, int qid_map_select) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + struct nbl_qid_map_param *param = (struct nbl_qid_map_param *)data; + struct nbl_virtio_qid_map_table info = { 0 }, info_data = { 0 }; + struct nbl_queue_table_select select = { 0 }; + u64 reg; + int i, j; + + if (hw_mgt->hw_status) + return 0; + + for (i = 0; i < param->len; i++) { + j = 0; + + info.local_qid = param->qid_map[i].local_qid; + info.notify_addr_l = param->qid_map[i].notify_addr_l; + info.notify_addr_h = param->qid_map[i].notify_addr_h; + info.global_qid = param->qid_map[i].global_qid; + info.ctrlq_flag = param->qid_map[i].ctrlq_flag; + + do { + reg = NBL_PCOMPLETER_QID_MAP_REG_ARR(qid_map_select, + param->start + i); + nbl_hw_wr_regs(hw_mgt, reg, (u8 *)(&info), + sizeof(info)); + nbl_hw_rd_regs(hw_mgt, reg, (u8 *)(&info_data), + sizeof(info_data)); + if (likely(!memcmp(&info, &info_data, sizeof(info)))) + break; + j++; + } while (j < NBL_REG_WRITE_MAX_TRY_TIMES); + + if (j == NBL_REG_WRITE_MAX_TRY_TIMES) + nbl_err(common, + "Write to qid map table entry %d failed\n", + param->start + i); + } + + select.select = qid_map_select; + nbl_hw_wr_regs(hw_mgt, NBL_PCOMPLETER_QUEUE_TABLE_SELECT_REG, + (u8 *)&select, sizeof(select)); + + return 0; +} + +static int nbl_hw_set_qid_map_ready(void *priv, bool ready) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_queue_table_ready queue_table_ready = { 0 }; + + queue_table_ready.ready = ready; + nbl_hw_wr_regs(hw_mgt, NBL_PCOMPLETER_QUEUE_TABLE_READY_REG, + (u8 *)&queue_table_ready, sizeof(queue_table_ready)); + + return 0; +} + +static int nbl_hw_cfg_ipro_queue_tbl(void *priv, u16 queue_id, u16 vsi_id, + u8 enable) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_ipro_queue_tbl ipro_queue_tbl = { 0 }; + + ipro_queue_tbl.vsi_en = enable; + 
ipro_queue_tbl.vsi_id = vsi_id; + + nbl_hw_wr_regs(hw_mgt, NBL_IPRO_QUEUE_TBL(queue_id), + (u8 *)&ipro_queue_tbl, sizeof(ipro_queue_tbl)); + + return 0; +} + +static int nbl_hw_cfg_ipro_dn_sport_tbl(void *priv, u16 vsi_id, u16 dst_eth_id, + u16 bmode, bool binit) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_ipro_dn_src_port_tbl dpsport = { 0 }; + + if (binit) { + dpsport.entry_vld = 1; + dpsport.hw_flow = 1; + dpsport.set_dport.dport.down.upcall_flag = AUX_FWD_TYPE_NML_FWD; + dpsport.set_dport.dport.down.port_type = SET_DPORT_TYPE_ETH_LAG; + dpsport.set_dport.dport.down.lag_vld = 0; + dpsport.set_dport.dport.down.eth_vld = 1; + dpsport.set_dport.dport.down.eth_id = dst_eth_id; + dpsport.vlan_layer_num_1 = 3; + dpsport.set_dport_en = 1; + } else { + nbl_hw_rd_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TABLE(vsi_id), + (u8 *)&dpsport, + sizeof(struct nbl_ipro_dn_src_port_tbl)); + } + + if (bmode == BRIDGE_MODE_VEPA) + dpsport.set_dport.dport.down.next_stg_sel = NEXT_STG_SEL_EPRO; + else + dpsport.set_dport.dport.down.next_stg_sel = NEXT_STG_SEL_NONE; + + nbl_hw_wr_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TABLE(vsi_id), + (u8 *)&dpsport, sizeof(struct nbl_ipro_dn_src_port_tbl)); + + return 0; +} + +static int nbl_hw_set_vnet_queue_info(void *priv, + struct nbl_vnet_queue_info_param *param, + u16 queue_id) +{ + struct nbl_hw_mgt_leonis *hw_mgt_leonis = + (struct nbl_hw_mgt_leonis *)priv; + struct nbl_hw_mgt *hw_mgt = &hw_mgt_leonis->hw_mgt; + struct nbl_host_vnet_qinfo host_vnet_qinfo = { 0 }; + + host_vnet_qinfo.function_id = param->function_id; + host_vnet_qinfo.device_id = param->device_id; + host_vnet_qinfo.bus_id = param->bus_id; + host_vnet_qinfo.valid = param->valid; + host_vnet_qinfo.msix_idx = param->msix_idx; + host_vnet_qinfo.msix_idx_valid = param->msix_idx_valid; + + if (hw_mgt_leonis->ro_enable) { + host_vnet_qinfo.ido_en = 1; + host_vnet_qinfo.rlo_en = 1; + } + + nbl_hw_wr_regs(hw_mgt, NBL_PADPT_HOST_VNET_QINFO_REG_ARR(queue_id), + (u8 
*)&host_vnet_qinfo, sizeof(host_vnet_qinfo));
+
+	return 0;
+}
+
+static int nbl_hw_clear_vnet_queue_info(void *priv, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_host_vnet_qinfo host_vnet_qinfo = { 0 };
+
+	nbl_hw_wr_regs(hw_mgt, NBL_PADPT_HOST_VNET_QINFO_REG_ARR(queue_id),
+		       (u8 *)&host_vnet_qinfo, sizeof(host_vnet_qinfo));
+	return 0;
+}
+
+static int nbl_hw_reset_dvn_cfg(void *priv, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+	struct nbl_dvn_queue_reset queue_reset = { 0 };
+	struct nbl_dvn_queue_reset_done queue_reset_done = { 0 };
+	int i = 0;
+
+	queue_reset.dvn_queue_index = queue_id;
+	queue_reset.vld = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_RESET_REG, (u8 *)&queue_reset,
+		       sizeof(queue_reset));
+
+	udelay(5);
+	nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_RESET_DONE_REG,
+		       (u8 *)&queue_reset_done, sizeof(queue_reset_done));
+	while (!queue_reset_done.flag) {
+		i++;
+		if (!(i % 10)) {
+			nbl_err(common,
+				"Wait too long for tx queue reset to be done");
+			break;
+		}
+
+		udelay(5);
+		nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_RESET_DONE_REG,
+			       (u8 *)&queue_reset_done,
+			       sizeof(queue_reset_done));
+	}
+
+	nbl_debug(common, "dvn:%u cfg reset succeeded, waited %d * 5us\n",
+		  queue_id, i);
+	return 0;
+}
+
+static int nbl_hw_reset_uvn_cfg(void *priv, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+	struct nbl_uvn_queue_reset queue_reset = { 0 };
+	struct nbl_uvn_queue_reset_done queue_reset_done = { 0 };
+	int i = 0;
+
+	queue_reset.index = queue_id;
+	queue_reset.vld = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_RESET_REG, (u8 *)&queue_reset,
+		       sizeof(queue_reset));
+
+	udelay(5);
+	nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_RESET_DONE_REG,
+		       (u8 *)&queue_reset_done, sizeof(queue_reset_done));
+	while (!queue_reset_done.flag) {
+		i++;
+		if (!(i % 10)) {
+
nbl_err(common,
+				"Wait too long for rx queue reset to be done");
+			break;
+		}
+
+		udelay(5);
+		nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_RESET_DONE_REG,
+			       (u8 *)&queue_reset_done,
+			       sizeof(queue_reset_done));
+	}
+
+	nbl_debug(common, "uvn:%u cfg reset succeeded, waited %d * 5us\n",
+		  queue_id, i);
+	return 0;
+}
+
+static int nbl_hw_restore_dvn_context(void *priv, u16 queue_id, u16 split,
+				      u16 last_avail_index)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+	struct dvn_queue_context cxt = { 0 };
+
+	cxt.dvn_ring_wrap_counter = last_avail_index >> 15;
+	if (split)
+		cxt.dvn_avail_ring_read = last_avail_index;
+	else
+		cxt.dvn_l1_ring_read = last_avail_index & 0x7FFF;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_CXT_TABLE_ARR(queue_id),
+		       (u8 *)&cxt, sizeof(cxt));
+	nbl_info(common, "config tx ring: %u, last avail idx: %u\n", queue_id,
+		 last_avail_index);
+
+	return 0;
+}
+
+static int nbl_hw_restore_uvn_context(void *priv, u16 queue_id, u16 split,
+				      u16 last_avail_index)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+	struct uvn_queue_cxt cxt = { 0 };
+
+	cxt.wrap_count = last_avail_index >> 15;
+	if (split)
+		cxt.queue_head = last_avail_index;
+	else
+		cxt.queue_head = last_avail_index & 0x7FFF;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_CXT_TABLE_ARR(queue_id),
+		       (u8 *)&cxt, sizeof(cxt));
+	nbl_info(common, "config rx ring: %u, last avail idx: %u\n", queue_id,
+		 last_avail_index);
+
+	return 0;
+}
+
+static int nbl_hw_get_tx_queue_cfg(void *priv, void *data, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_queue_cfg_param *queue_cfg =
+		(struct nbl_queue_cfg_param *)data;
+	struct dvn_queue_table info = { 0 };
+
+	nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+		       sizeof(info));
+	queue_cfg->desc = info.dvn_queue_baddr;
+	queue_cfg->avail = info.dvn_avail_baddr;
+	queue_cfg->used = info.dvn_used_baddr;
+	queue_cfg->size = info.dvn_queue_size;
+	queue_cfg->split = info.dvn_queue_type;
+	queue_cfg->extend_header = info.dvn_extend_header_en;
+
+	return 0;
+}
+
+static int nbl_hw_get_rx_queue_cfg(void *priv, void *data, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_queue_cfg_param *queue_cfg =
+		(struct nbl_queue_cfg_param *)data;
+	struct uvn_queue_table info = { 0 };
+
+	nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+		       sizeof(info));
+
+	queue_cfg->desc = info.queue_baddr;
+	queue_cfg->avail = info.avail_baddr;
+	queue_cfg->used = info.used_baddr;
+	queue_cfg->size = info.queue_size_mask_pow;
+	queue_cfg->split = info.queue_type;
+	queue_cfg->extend_header = info.extend_header_en;
+	queue_cfg->half_offload_en = info.half_offload_en;
+	queue_cfg->rxcsum = info.guest_csum_en;
+
+	return 0;
+}
+
+static int nbl_hw_cfg_tx_queue(void *priv, void *data, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_queue_cfg_param *queue_cfg =
+		(struct nbl_queue_cfg_param *)data;
+	struct dvn_queue_table info = { 0 };
+
+	info.dvn_queue_baddr = queue_cfg->desc;
+	if (!queue_cfg->split && !queue_cfg->extend_header)
+		queue_cfg->avail = queue_cfg->avail | 3;
+	info.dvn_avail_baddr = queue_cfg->avail;
+	info.dvn_used_baddr = queue_cfg->used;
+	info.dvn_queue_size = ilog2(queue_cfg->size);
+	info.dvn_queue_type = queue_cfg->split;
+	info.dvn_queue_en = 1;
+	info.dvn_extend_header_en = queue_cfg->extend_header;
+
+	nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info,
+		       sizeof(info));
+
+	return 0;
+}
+
+static int nbl_hw_cfg_rx_queue(void *priv, void *data, u16 queue_id)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_queue_cfg_param *queue_cfg =
+
(struct nbl_queue_cfg_param *)data; + struct uvn_queue_table info = { 0 }; + + info.queue_baddr = queue_cfg->desc; + info.avail_baddr = queue_cfg->avail; + info.used_baddr = queue_cfg->used; + info.queue_size_mask_pow = ilog2(queue_cfg->size); + info.queue_type = queue_cfg->split; + info.extend_header_en = queue_cfg->extend_header; + info.half_offload_en = queue_cfg->half_offload_en; + info.guest_csum_en = queue_cfg->rxcsum; + info.queue_enable = 1; + + nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info, + sizeof(info)); + + return 0; +} + +static bool nbl_hw_check_q2tc(void *priv, u16 queue_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct dsch_vn_q2tc_cfg_tbl info; + + nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id), + (u8 *)&info, sizeof(info)); + return info.vld; +} + +static int nbl_hw_cfg_q2tc_netid(void *priv, u16 queue_id, u16 netid, u16 vld) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct dsch_vn_q2tc_cfg_tbl info; + + nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id), + (u8 *)&info, sizeof(info)); + info.tcid = (info.tcid & 0x7) | (netid << 3); + info.vld = vld; + + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id), + (u8 *)&info, sizeof(info)); + return 0; +} + +static void nbl_hw_active_shaping(void *priv, u16 func_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_shaping_net shaping_net = { 0 }; + struct dsch_vn_sha2net_map_tbl sha2net = { 0 }; + struct dsch_vn_net2sha_map_tbl net2sha = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net, + sizeof(shaping_net)); + + if (!shaping_net.depth) + return; + + sha2net.vld = 1; + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(func_id), + (u8 *)&sha2net, sizeof(sha2net)); + + shaping_net.valid = 1; + nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net, + sizeof(shaping_net)); + + net2sha.vld = 1; + 
nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(func_id), + (u8 *)&net2sha, sizeof(net2sha)); +} + +static void nbl_hw_deactive_shaping(void *priv, u16 func_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_shaping_net shaping_net = { 0 }; + struct dsch_vn_sha2net_map_tbl sha2net = { 0 }; + struct dsch_vn_net2sha_map_tbl net2sha = { 0 }; + + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(func_id), + (u8 *)&net2sha, sizeof(net2sha)); + + nbl_hw_rd_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net, + sizeof(shaping_net)); + shaping_net.valid = 0; + nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net, + sizeof(shaping_net)); + + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(func_id), + (u8 *)&sha2net, sizeof(sha2net)); +} + +static int nbl_hw_set_shaping(void *priv, u16 func_id, u64 total_tx_rate, + u64 burst, u8 vld, bool active) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_shaping_net shaping_net = { 0 }; + struct dsch_vn_sha2net_map_tbl sha2net = { 0 }; + struct dsch_vn_net2sha_map_tbl net2sha = { 0 }; + + if (vld) { + sha2net.vld = active; + nbl_hw_wr_regs(hw_mgt, + NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(func_id), + (u8 *)&sha2net, sizeof(sha2net)); + } else { + net2sha.vld = vld; + nbl_hw_wr_regs(hw_mgt, + NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(func_id), + (u8 *)&net2sha, sizeof(net2sha)); + } + + /* cfg shaping cir/pir */ + if (vld) { + shaping_net.valid = active; + /* total_tx_rate unit Mb/s */ + /* cir 1 default represents 1Mbps */ + shaping_net.cir = total_tx_rate; + /* pir equal cir */ + shaping_net.pir = shaping_net.cir; + if (burst) + shaping_net.depth = burst; + else + shaping_net.depth = max(shaping_net.cir * 2, + NBL_LR_LEONIS_NET_BUCKET_DEPTH); + shaping_net.cbs = shaping_net.depth; + shaping_net.pbs = shaping_net.depth; + } + + nbl_hw_wr_regs(hw_mgt, NBL_SHAPING_NET(func_id), (u8 *)&shaping_net, + sizeof(shaping_net)); + + if (!vld) { 
+		sha2net.vld = vld;
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_DSCH_VN_SHA2NET_MAP_TABLE_REG_ARR(func_id),
+			       (u8 *)&sha2net, sizeof(sha2net));
+	} else {
+		net2sha.vld = active;
+		nbl_hw_wr_regs(hw_mgt,
+			       NBL_DSCH_VN_NET2SHA_MAP_TABLE_REG_ARR(func_id),
+			       (u8 *)&net2sha, sizeof(net2sha));
+	}
+
+	return 0;
+}
+
+static int nbl_hw_set_ucar(void *priv, u16 vsi_id, u64 total_rx_rate, u64 burst,
+			   u8 vld)
+{
+	struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv;
+	struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt);
+	union ucar_flow_u ucar_flow = { .info = { 0 } };
+	union epro_vpt_u epro_vpt = { .info = { 0 } };
+	int car_id = 0;
+	int index = 0;
+
+	nbl_hw_rd_regs(hw_mgt, NBL_EPRO_VPT_REG(vsi_id), (u8 *)&epro_vpt,
+		       sizeof(epro_vpt));
+	if (vld) {
+		if (epro_vpt.info.car_en) {
+			car_id = epro_vpt.info.car_id;
+		} else {
+			epro_vpt.info.car_en = 1;
+			for (; index < 1024; index++) {
+				nbl_hw_rd_regs(hw_mgt, NBL_UCAR_FLOW_REG(index),
+					       (u8 *)&ucar_flow,
+					       sizeof(ucar_flow));
+				if (ucar_flow.info.valid == 0) {
+					car_id = index;
+					break;
+				}
+			}
+			if (index == 1024) {
+				nbl_err(common,
+					"Car ID exceeds the valid range!");
+				return -ENOMEM;
+			}
+			epro_vpt.info.car_id = car_id;
+			nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_REG(vsi_id),
+				       (u8 *)&epro_vpt, sizeof(epro_vpt));
+		}
+	} else {
+		epro_vpt.info.car_en = 0;
+		car_id = epro_vpt.info.car_id;
+		epro_vpt.info.car_id = 0;
+		nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_REG(vsi_id),
+			       (u8 *)&epro_vpt, sizeof(epro_vpt));
+	}
+
+	if (vld) {
+		ucar_flow.info.valid = 1;
+		ucar_flow.info.cir = total_rx_rate;
+		ucar_flow.info.pir = total_rx_rate;
+		if (burst)
+			ucar_flow.info.depth = burst;
+		else
+			ucar_flow.info.depth = NBL_UCAR_MAX_BUCKET_DEPTH;
+		ucar_flow.info.cbs = ucar_flow.info.depth;
+		ucar_flow.info.pbs = ucar_flow.info.depth;
+	}
+	nbl_hw_wr_regs(hw_mgt, NBL_UCAR_FLOW_REG(car_id), (u8 *)&ucar_flow,
+		       sizeof(ucar_flow));
+
+	return 0;
+}
+
+static int nbl_hw_cfg_dsch_net_to_group(void *priv, u16 func_id, u16 group_id,
+					u16 vld)
+{
+ struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct dsch_vn_n2g_cfg_tbl info = { 0 }; + + info.grpid = group_id; + info.vld = vld; + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_N2G_CFG_TABLE_REG_ARR(func_id), + (u8 *)&info, sizeof(info)); + return 0; +} + +static int nbl_hw_cfg_epro_rss_ret(void *priv, u32 index, u8 size_type, + u32 q_num, u16 *queue_list, const u32 *indir) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + struct nbl_epro_rss_ret_tbl rss_ret = { 0 }; + u32 table_id, table_end, group_count, odd_num, queue_id = 0; + + group_count = NBL_EPRO_RSS_ENTRY_SIZE_UNIT << size_type; + if (group_count > NBL_EPRO_RSS_ENTRY_MAX_COUNT) { + nbl_err(common, + "Rss group entry size type %u exceed the max value %u", + size_type, NBL_EPRO_RSS_ENTRY_SIZE_256); + return -EINVAL; + } + + if (q_num > group_count) { + nbl_err(common, "q_num %u exceed the rss group count %u\n", + q_num, group_count); + return -EINVAL; + } + if (index >= NBL_EPRO_RSS_RET_TBL_DEPTH || + (index + group_count) > NBL_EPRO_RSS_RET_TBL_DEPTH) { + nbl_err(common, + "index %u exceed the max table entry %u, entry size: %u\n", + index, NBL_EPRO_RSS_RET_TBL_DEPTH, group_count); + return -EINVAL; + } + + table_id = index / 2; + table_end = (index + group_count) / 2; + odd_num = index % 2; + nbl_hw_rd_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), (u8 *)&rss_ret, + sizeof(rss_ret)); + + if (indir) { + if (odd_num) { + rss_ret.vld1 = 1; + rss_ret.dqueue1 = indir[queue_id++]; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + table_id++; + } + + for (; table_id < table_end; table_id++) { + rss_ret.vld0 = 1; + rss_ret.dqueue0 = indir[queue_id++]; + rss_ret.vld1 = 1; + rss_ret.dqueue1 = indir[queue_id++]; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + } + + nbl_hw_rd_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, 
sizeof(rss_ret)); + + if (odd_num) { + rss_ret.vld0 = 1; + rss_ret.dqueue0 = indir[queue_id++]; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + } + } else { + if (odd_num) { + rss_ret.vld1 = 1; + rss_ret.dqueue1 = queue_list[queue_id++]; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + table_id++; + } + + queue_id = queue_id % q_num; + for (; table_id < table_end; table_id++) { + rss_ret.vld0 = 1; + rss_ret.dqueue0 = queue_list[queue_id++]; + queue_id = queue_id % q_num; + rss_ret.vld1 = 1; + rss_ret.dqueue1 = queue_list[queue_id++]; + queue_id = queue_id % q_num; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + } + + nbl_hw_rd_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + + if (odd_num) { + rss_ret.vld0 = 1; + rss_ret.dqueue0 = queue_list[queue_id++]; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_RET_TABLE(table_id), + (u8 *)&rss_ret, sizeof(rss_ret)); + } + } + + return 0; +} + +static struct nbl_epro_rss_key epro_rss_key_def = { + .key0 = 0x6d5a6d5a6d5a6d5a, + .key1 = 0x6d5a6d5a6d5a6d5a, + .key2 = 0x6d5a6d5a6d5a6d5a, + .key3 = 0x6d5a6d5a6d5a6d5a, + .key4 = 0x6d5a6d5a6d5a6d5a, +}; + +static int nbl_hw_init_epro_rss_key(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_KEY_REG, (u8 *)&epro_rss_key_def, + sizeof(epro_rss_key_def)); + + return 0; +} + +static int nbl_hw_init_epro_vpt_tbl(void *priv, u16 vsi_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_epro_vpt_tbl epro_vpt_tbl = { 0 }; + + epro_vpt_tbl.vld = 1; + epro_vpt_tbl.fwd = NBL_EPRO_FWD_TYPE_DROP; + epro_vpt_tbl.rss_alg_sel = NBL_EPRO_RSS_ALG_TOEPLITZ_HASH; + epro_vpt_tbl.rss_key_type_ipv4 = NBL_EPRO_RSS_KEY_TYPE_IPV4_L4; + epro_vpt_tbl.rss_key_type_ipv6 = NBL_EPRO_RSS_KEY_TYPE_IPV6_L4; + + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), 
(u8 *)&epro_vpt_tbl, + sizeof(struct nbl_epro_vpt_tbl)); + + return 0; +} + +static int nbl_hw_set_epro_rss_pt(void *priv, u16 vsi_id, u16 rss_ret_base, + u16 rss_entry_size) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_epro_rss_pt_tbl epro_rss_pt_tbl = { 0 }; + struct nbl_epro_vpt_tbl epro_vpt_tbl; + u16 entry_size; + + if (rss_entry_size > NBL_EPRO_RSS_ENTRY_MAX_SIZE) + entry_size = NBL_EPRO_RSS_ENTRY_MAX_SIZE; + else + entry_size = rss_entry_size; + + epro_rss_pt_tbl.vld = 1; + epro_rss_pt_tbl.entry_size = entry_size; + epro_rss_pt_tbl.offset0_vld = 1; + epro_rss_pt_tbl.offset0 = rss_ret_base; + if (rss_entry_size > NBL_EPRO_RSS_ENTRY_MAX_SIZE) { + epro_rss_pt_tbl.offset1_vld = 1; + epro_rss_pt_tbl.offset1 = + rss_ret_base + + (NBL_EPRO_RSS_ENTRY_SIZE_UNIT << entry_size); + } else { + epro_rss_pt_tbl.offset1_vld = 0; + epro_rss_pt_tbl.offset1 = 0; + } + + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_PT_TABLE(vsi_id), + (u8 *)&epro_rss_pt_tbl, sizeof(epro_rss_pt_tbl)); + + nbl_hw_rd_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl, + sizeof(epro_vpt_tbl)); + epro_vpt_tbl.fwd = NBL_EPRO_FWD_TYPE_NORMAL; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl, + sizeof(epro_vpt_tbl)); + + return 0; +} + +static int nbl_hw_clear_epro_rss_pt(void *priv, u16 vsi_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_epro_rss_pt_tbl epro_rss_pt_tbl = { 0 }; + struct nbl_epro_vpt_tbl epro_vpt_tbl; + + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_RSS_PT_TABLE(vsi_id), + (u8 *)&epro_rss_pt_tbl, sizeof(epro_rss_pt_tbl)); + + nbl_hw_rd_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl, + sizeof(epro_vpt_tbl)); + epro_vpt_tbl.fwd = NBL_EPRO_FWD_TYPE_DROP; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_VPT_TABLE(vsi_id), (u8 *)&epro_vpt_tbl, + sizeof(epro_vpt_tbl)); + + return 0; +} + +static int nbl_hw_disable_dvn(void *priv, u16 queue_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct 
dvn_queue_table info = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info, + sizeof(info)); + info.dvn_queue_en = 0; + nbl_hw_wr_regs(hw_mgt, NBL_DVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info, + sizeof(info)); + return 0; +} + +static int nbl_hw_disable_uvn(void *priv, u16 queue_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct uvn_queue_table info = { 0 }; + + nbl_hw_wr_regs(hw_mgt, NBL_UVN_QUEUE_TABLE_ARR(queue_id), (u8 *)&info, + sizeof(info)); + return 0; +} + +static bool nbl_hw_is_txq_drain_out(struct nbl_hw_mgt *hw_mgt, u16 queue_id, + struct dsch_vn_tc_q_list_tbl *tc_q_list) +{ + nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_TC_Q_LIST_TABLE_REG_ARR(queue_id), + (u8 *)tc_q_list, sizeof(*tc_q_list)); + if (!tc_q_list->regi && !tc_q_list->fly) + return true; + + return false; +} + +static bool nbl_hw_is_rxq_drain_out(struct nbl_hw_mgt *hw_mgt, u16 queue_id) +{ + struct uvn_desc_cxt cache_ctx = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_UVN_DESC_CXT_TABLE_ARR(queue_id), + (u8 *)&cache_ctx, sizeof(cache_ctx)); + if (cache_ctx.cache_pref_num_prev == cache_ctx.cache_pref_num_post) + return true; + + return false; +} + +static int nbl_hw_lso_dsch_drain(void *priv, u16 queue_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + struct dsch_vn_tc_q_list_tbl tc_q_list = { 0 }; + struct dsch_vn_q2tc_cfg_tbl info; + int i = 0; + + nbl_hw_rd_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id), + (u8 *)&info, sizeof(info)); + info.vld = 0; + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_Q2TC_CFG_TABLE_REG_ARR(queue_id), + (u8 *)&info, sizeof(info)); + do { + if (nbl_hw_is_txq_drain_out(hw_mgt, queue_id, &tc_q_list)) + break; + + usleep_range(10, 20); + } while (++i < NBL_DRAIN_WAIT_TIMES); + + if (i >= NBL_DRAIN_WAIT_TIMES) { + nbl_err(common, + "nbl queue %u lso dsch drain, regi %u, fly %u, vld %u\n", + queue_id, tc_q_list.regi, tc_q_list.fly, tc_q_list.vld); + 
return -1; + } + + return 0; +} + +static int nbl_hw_rsc_cache_drain(void *priv, u16 queue_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + int i = 0; + + do { + if (nbl_hw_is_rxq_drain_out(hw_mgt, queue_id)) + break; + + usleep_range(10, 20); + } while (++i < NBL_DRAIN_WAIT_TIMES); + + if (i >= NBL_DRAIN_WAIT_TIMES) { + nbl_err(common, "nbl queue %u rsc cache drain timeout\n", + queue_id); + return -1; + } + + return 0; +} + +static u16 nbl_hw_save_dvn_ctx(void *priv, u16 queue_id, u16 split) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + struct dvn_queue_context dvn_ctx = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_DVN_QUEUE_CXT_TABLE_ARR(queue_id), + (u8 *)&dvn_ctx, sizeof(dvn_ctx)); + + nbl_debug(common, "DVNQ save ctx: %d packed: %08x %08x split: %08x\n", + queue_id, dvn_ctx.dvn_ring_wrap_counter, + dvn_ctx.dvn_l1_ring_read, dvn_ctx.dvn_avail_ring_idx); + + if (split) + return (dvn_ctx.dvn_avail_ring_idx); + else + return (dvn_ctx.dvn_l1_ring_read & 0x7FFF) | + (dvn_ctx.dvn_ring_wrap_counter << 15); +} + +static u16 nbl_hw_save_uvn_ctx(void *priv, u16 queue_id, u16 split, + u16 queue_size) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common = NBL_HW_MGT_TO_COMMON(hw_mgt); + struct uvn_queue_cxt queue_cxt = { 0 }; + struct uvn_desc_cxt desc_cxt = { 0 }; + u16 cache_diff, queue_head, wrap_count; + + nbl_hw_rd_regs(hw_mgt, NBL_UVN_QUEUE_CXT_TABLE_ARR(queue_id), + (u8 *)&queue_cxt, sizeof(queue_cxt)); + nbl_hw_rd_regs(hw_mgt, NBL_UVN_DESC_CXT_TABLE_ARR(queue_id), + (u8 *)&desc_cxt, sizeof(desc_cxt)); + + nbl_debug(common, + "UVN save ctx: %d cache_tail: %08x cache_head %08x queue_head: %08x\n", + queue_id, desc_cxt.cache_tail, desc_cxt.cache_head, + queue_cxt.queue_head); + + cache_diff = (desc_cxt.cache_tail - desc_cxt.cache_head + 64) & (0x3F); + queue_head = 
(queue_cxt.queue_head - cache_diff + 65536) & (0xFFFF); + if (queue_size) + wrap_count = !((queue_head / queue_size) & 0x1); + else + return 0xffff; + + nbl_debug(common, "UVN save ctx: %d packed: %08x %08x split: %08x\n", + queue_id, wrap_count, queue_head, queue_head); + + if (split) + return (queue_head); + else + return (queue_head & 0x7FFF) | (wrap_count << 15); +} + +static void nbl_hw_setup_queue_switch(void *priv, u16 eth_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_ipro_upsport_tbl upsport = { 0 }; + struct nbl_epro_ept_tbl ept_tbl = { 0 }; + struct dsch_vn_g2p_cfg_tbl info = { 0 }; + + upsport.hw_flow = 1; + upsport.entry_vld = 1; + upsport.set_dport_en = 1; + upsport.set_dport_pri = 0; + upsport.vlan_layer_num_0 = 3; + upsport.vlan_layer_num_1 = 3; + /* default we close promisc */ + upsport.set_dport.data = 0xFFF; + + ept_tbl.vld = 1; + ept_tbl.fwd = 1; + + info.vld = 1; + info.port = (eth_id << 1); + + nbl_hw_wr_regs(hw_mgt, NBL_IPRO_UP_SPORT_TABLE(eth_id), (u8 *)&upsport, + sizeof(upsport)); + + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_EPT_TABLE(eth_id), (u8 *)&ept_tbl, + sizeof(struct nbl_epro_ept_tbl)); + + nbl_hw_wr_regs(hw_mgt, NBL_DSCH_VN_G2P_CFG_TABLE_REG_ARR(eth_id), + (u8 *)&info, sizeof(info)); +} + +static void nbl_hw_init_pfc(void *priv, u8 ether_ports) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_dqm_rxmac_tx_port_bp_en_cfg dqm_port_bp_en = { 0 }; + struct nbl_dqm_rxmac_tx_cos_bp_en_cfg dqm_cos_bp_en = { 0 }; + struct nbl_uqm_rx_cos_bp_en_cfg uqm_rx_cos_bp_en = { 0 }; + struct nbl_uqm_tx_cos_bp_en_cfg uqm_tx_cos_bp_en = { 0 }; + struct nbl_ustore_port_fc_th ustore_port_fc_th = { 0 }; + struct nbl_ustore_cos_fc_th ustore_cos_fc_th = { 0 }; + struct nbl_epro_port_pri_mdf_en_cfg pri_mdf_en_cfg = { 0 }; + struct nbl_epro_cos_map cos_map = { 0 }; + struct nbl_upa_pri_sel_conf sel_conf = { 0 }; + struct nbl_upa_pri_conf conf_table = { 0 }; + int i, j; + + /* DQM */ + /* set default bp_mode: port 
*/
+	/* TX bp: dqm sends received ETH RX Pause to DSCH */
+	/* dqm rxmac_tx_port_bp_en */
+	dqm_port_bp_en.eth0 = 1;
+	dqm_port_bp_en.eth1 = 1;
+	dqm_port_bp_en.eth2 = 1;
+	dqm_port_bp_en.eth3 = 1;
+	nbl_hw_wr_regs(hw_mgt, NBL_DQM_RXMAC_TX_PORT_BP_EN,
+		       (u8 *)(&dqm_port_bp_en), sizeof(dqm_port_bp_en));
+
+	/* TX bp: dqm does not send received ETH RX PFC to DSCH */
+	/* dqm rxmac_tx_cos_bp_en */
+	dqm_cos_bp_en.eth0 = 0;
+	dqm_cos_bp_en.eth1 = 0;
+	dqm_cos_bp_en.eth2 = 0;
+	dqm_cos_bp_en.eth3 = 0;
+	nbl_hw_wr_regs(hw_mgt, NBL_DQM_RXMAC_TX_COS_BP_EN,
+		       (u8 *)(&dqm_cos_bp_en), sizeof(dqm_cos_bp_en));
+
+	/* UQM */
+	/* RX bp: uqm receive loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h port bp */
+	/* uqm rx_port_bp_en_cfg is ok */
+	/* RX bp: uqm receive loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h port bp */
+	/* uqm tx_port_bp_en_cfg is ok */
+
+	/* RX bp: uqm receive loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h cos bp */
+	/* uqm rx_cos_bp_en */
+	uqm_rx_cos_bp_en.vld_l = 0xFFFFFFFF;
+	uqm_rx_cos_bp_en.vld_h = 0xFFFF;
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_RX_COS_BP_EN, (u8 *)(&uqm_rx_cos_bp_en),
+		       sizeof(uqm_rx_cos_bp_en));
+
+	/* RX bp: uqm send received loopback/emp/rdma_e/rdma_h/l4s_e/l4s_h cos
+	 * bp to USTORE
+	 */
+	/* uqm tx_cos_bp_en */
+	uqm_tx_cos_bp_en.vld_l = 0xFFFFFFFF;
+	uqm_tx_cos_bp_en.vld_h = 0xFF;
+	nbl_hw_wr_regs(hw_mgt, NBL_UQM_TX_COS_BP_EN, (u8 *)(&uqm_tx_cos_bp_en),
+		       sizeof(uqm_tx_cos_bp_en));
+
+	/* TX bp: DSCH dp0-3 response to DQM dp0-3 pfc/port bp */
+	/* dsch_dpt_pfc_map_vnh default value is ok */
+	/* TX bp: DSCH response to DQM cos bp, pkt_cos -> sch_cos map table */
+	/* dsch vn_host_dpx_prixx_p2s_map_cfg is ok */
+
+	/* downstream: enable modify packet pri */
+	/* epro port_pri_mdf_en */
+	pri_mdf_en_cfg.eth0 = 0;
+	pri_mdf_en_cfg.eth1 = 0;
+	pri_mdf_en_cfg.eth2 = 0;
+	pri_mdf_en_cfg.eth3 = 0;
+	nbl_hw_wr_regs(hw_mgt, NBL_EPRO_PORT_PRI_MDF_EN,
+		       (u8 *)(&pri_mdf_en_cfg), sizeof(pri_mdf_en_cfg));
+
+	for (i = 0; i < ether_ports; i++) {
+		/* set default bp_mode: port
*/ + /* RX bp: USTORE port bp th, enable send pause frame */ + /* ustore port_fc_th */ + ustore_port_fc_th.xoff_th = 0x190; + ustore_port_fc_th.xon_th = 0x190; + ustore_port_fc_th.fc_set = 0; + ustore_port_fc_th.fc_en = 1; + nbl_hw_wr_regs(hw_mgt, NBL_USTORE_PORT_FC_TH_REG_ARR(i), + (u8 *)(&ustore_port_fc_th), + sizeof(ustore_port_fc_th)); + + for (j = 0; j < 8; j++) { + /* RX bp: ustore cos bp th, disable send pfc frame */ + /* ustore cos_fc_th */ + ustore_cos_fc_th.xoff_th = 0x64; + ustore_cos_fc_th.xon_th = 0x64; + ustore_cos_fc_th.fc_set = 0; + ustore_cos_fc_th.fc_en = 0; + nbl_hw_wr_regs(hw_mgt, + NBL_USTORE_COS_FC_TH_REG_ARR(i * 8 + j), + (u8 *)(&ustore_cos_fc_th), + sizeof(ustore_cos_fc_th)); + + /* downstream: sch_cos->pkt_cos or sch_cos->dscp */ + /* epro sch_cos_map */ + cos_map.pkt_cos = j; + cos_map.dscp = j << 3; + nbl_hw_wr_regs(hw_mgt, NBL_EPRO_SCH_COS_MAP_TABLE(i, j), + (u8 *)(&cos_map), sizeof(cos_map)); + } + } + + /* upstream: pkt dscp/802.1p -> sch_cos */ + for (i = 0; i < ether_ports; i++) { + /* upstream: when pfc_mode is 802.1p, + * vlan pri -> sch_cos map table + */ + /* upa pri_conf_table */ + conf_table.pri0 = 0; + conf_table.pri1 = 1; + conf_table.pri2 = 2; + conf_table.pri3 = 3; + conf_table.pri4 = 4; + conf_table.pri5 = 5; + conf_table.pri6 = 6; + conf_table.pri7 = 7; + nbl_hw_wr_regs(hw_mgt, NBL_UPA_PRI_CONF_TABLE(i * 8), + (u8 *)(&conf_table), sizeof(conf_table)); + + /* upstream: set default pfc_mode is 802.1p, use outer vlan */ + /* upa pri_sel_conf */ + sel_conf.pri_sel = (1 << 4 | 1 << 3); + nbl_hw_wr_regs(hw_mgt, NBL_UPA_PRI_SEL_CONF_TABLE(i), + (u8 *)(&sel_conf), sizeof(sel_conf)); + } } static void nbl_hw_enable_mailbox_irq(void *priv, u16 func_id, bool enable_msix, @@ -361,6 +1974,25 @@ static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus, (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map)); } +static void nbl_hw_set_promisc_mode(void *priv, u16 vsi_id, u16 eth_id, + u16 mode) +{ + struct nbl_ipro_upsport_tbl upsport; 
+ + nbl_hw_rd_regs(priv, NBL_IPRO_UP_SPORT_TABLE(eth_id), (u8 *)&upsport, + sizeof(upsport)); + if (mode) { + upsport.set_dport.dport.up.upcall_flag = AUX_FWD_TYPE_NML_FWD; + upsport.set_dport.dport.up.port_type = SET_DPORT_TYPE_VSI_HOST; + upsport.set_dport.dport.up.port_id = vsi_id; + upsport.set_dport.dport.up.next_stg_sel = NEXT_STG_SEL_NONE; + } else { + upsport.set_dport.data = 0xFFF; + } + nbl_hw_wr_regs(priv, NBL_IPRO_UP_SPORT_TABLE(eth_id), (u8 *)&upsport, + sizeof(upsport)); +} + static void nbl_hw_set_coalesce(void *priv, u16 interrupt_id, u16 pnum, u16 rate) { @@ -437,7 +2069,7 @@ static void nbl_hw_cfg_adminq_qinfo(void *priv, u16 bus, u16 devid, u16 function) { struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; - struct nbl_adminq_qinfo_map_table adminq_qinfo_map = {0}; + struct nbl_adminq_qinfo_map_table adminq_qinfo_map = { 0 }; memset(&adminq_qinfo_map, 0, sizeof(adminq_qinfo_map)); adminq_qinfo_map.function = function; @@ -699,6 +2331,20 @@ nbl_hw_process_abnormal_event(void *priv, return ret; } +static u32 nbl_hw_get_uvn_desc_entry_stats(void *priv) +{ + return nbl_hw_rd32(priv, NBL_UVN_DESC_RD_ENTRY); +} + +static void nbl_hw_set_uvn_desc_wr_timeout(void *priv, u16 timeout) +{ + struct uvn_desc_wr_timeout wr_timeout = { 0 }; + + wr_timeout.num = timeout; + nbl_hw_wr_regs(priv, NBL_UVN_DESC_WR_TIMEOUT, (u8 *)&wr_timeout, + sizeof(wr_timeout)); +} + static void nbl_hw_get_board_info(void *priv, struct nbl_board_port_info *board_info) { @@ -747,6 +2393,44 @@ static enum nbl_hw_status nbl_hw_get_hw_status(void *priv) }; static struct nbl_hw_ops hw_ops = { + .init_chip_module = nbl_hw_init_chip_module, + .deinit_chip_module = nbl_hw_deinit_chip_module, + .init_qid_map_table = nbl_hw_init_qid_map_table, + .set_qid_map_table = nbl_hw_set_qid_map_table, + .set_qid_map_ready = nbl_hw_set_qid_map_ready, + .cfg_ipro_queue_tbl = nbl_hw_cfg_ipro_queue_tbl, + .cfg_ipro_dn_sport_tbl = nbl_hw_cfg_ipro_dn_sport_tbl, + .set_vnet_queue_info = 
nbl_hw_set_vnet_queue_info, + .clear_vnet_queue_info = nbl_hw_clear_vnet_queue_info, + .reset_dvn_cfg = nbl_hw_reset_dvn_cfg, + .reset_uvn_cfg = nbl_hw_reset_uvn_cfg, + .restore_dvn_context = nbl_hw_restore_dvn_context, + .restore_uvn_context = nbl_hw_restore_uvn_context, + .get_tx_queue_cfg = nbl_hw_get_tx_queue_cfg, + .get_rx_queue_cfg = nbl_hw_get_rx_queue_cfg, + .cfg_tx_queue = nbl_hw_cfg_tx_queue, + .cfg_rx_queue = nbl_hw_cfg_rx_queue, + .check_q2tc = nbl_hw_check_q2tc, + .cfg_q2tc_netid = nbl_hw_cfg_q2tc_netid, + .active_shaping = nbl_hw_active_shaping, + .deactive_shaping = nbl_hw_deactive_shaping, + .set_shaping = nbl_hw_set_shaping, + .set_ucar = nbl_hw_set_ucar, + .cfg_dsch_net_to_group = nbl_hw_cfg_dsch_net_to_group, + .init_epro_rss_key = nbl_hw_init_epro_rss_key, + .init_epro_vpt_tbl = nbl_hw_init_epro_vpt_tbl, + .cfg_epro_rss_ret = nbl_hw_cfg_epro_rss_ret, + .set_epro_rss_pt = nbl_hw_set_epro_rss_pt, + .clear_epro_rss_pt = nbl_hw_clear_epro_rss_pt, + .set_promisc_mode = nbl_hw_set_promisc_mode, + .disable_dvn = nbl_hw_disable_dvn, + .disable_uvn = nbl_hw_disable_uvn, + .lso_dsch_drain = nbl_hw_lso_dsch_drain, + .rsc_cache_drain = nbl_hw_rsc_cache_drain, + .save_dvn_ctx = nbl_hw_save_dvn_ctx, + .save_uvn_ctx = nbl_hw_save_uvn_ctx, + .setup_queue_switch = nbl_hw_setup_queue_switch, + .init_pfc = nbl_hw_init_pfc, .configure_msix_map = nbl_hw_configure_msix_map, .configure_msix_info = nbl_hw_configure_msix_info, .set_coalesce = nbl_hw_set_coalesce, @@ -781,7 +2465,10 @@ static struct nbl_hw_ops hw_ops = { .set_fw_ping = nbl_hw_set_fw_ping, .get_fw_pong = nbl_hw_get_fw_pong, .set_fw_pong = nbl_hw_set_fw_pong, + .process_abnormal_event = nbl_hw_process_abnormal_event, + .get_uvn_desc_entry_stats = nbl_hw_get_uvn_desc_entry_stats, + .set_uvn_desc_wr_timeout = nbl_hw_set_uvn_desc_wr_timeout, .get_fw_eth_num = nbl_hw_get_fw_eth_num, .get_fw_eth_map = nbl_hw_get_fw_eth_map, .get_board_info = nbl_hw_get_board_info, diff --git 
a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c new file mode 100644 index 000000000000..a4a70d9b8f74 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.c @@ -0,0 +1,1430 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#include <linux/if_bridge.h> +#include "nbl_queue_leonis.h" +#include "nbl_resource_leonis.h" + +static int nbl_res_queue_reset_uvn_pkt_drop_stats(void *priv, u16 func_id, + u16 global_queue_id); + +static struct nbl_queue_vsi_info * +nbl_res_queue_get_vsi_info(struct nbl_resource_mgt *res_mgt, u16 vsi_id) +{ + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info; + u16 func_id; + int i; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + queue_info = &queue_mgt->queue_info[func_id]; + + for (i = 0; i < NBL_VSI_MAX; i++) + if (queue_info->vsi_info[i].vsi_id == vsi_id) + return &queue_info->vsi_info[i]; + + return NULL; +} + +static int nbl_res_queue_get_net_id(u16 func_id, u16 vsi_type) +{ + int net_id; + + switch (vsi_type) { + case NBL_VSI_DATA: + case NBL_VSI_CTRL: + net_id = func_id + NBL_SPECIFIC_VSI_NET_ID_OFFSET; + break; + default: + net_id = func_id; + break; + } + + return net_id; +} + +static int nbl_res_queue_setup_queue_info(struct nbl_resource_mgt *res_mgt, + u16 func_id, u16 num_queues) +{ + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + u16 *txrx_queues, *queues_context; + u32 *uvn_stat_pkt_drop; + u16 queue_index; + int i, ret = 0; + + nbl_debug(common, "Setup qid map, func_id:%d, num_queues:%d", func_id, + num_queues); + + txrx_queues = kcalloc(num_queues, sizeof(txrx_queues[0]), GFP_ATOMIC); + if 
(!txrx_queues) {
+		ret = -ENOMEM;
+		goto alloc_txrx_queues_fail;
+	}
+
+	queues_context =
+		kcalloc(num_queues * 2, sizeof(txrx_queues[0]), GFP_ATOMIC);
+	if (!queues_context) {
+		ret = -ENOMEM;
+		goto alloc_queue_context_fail;
+	}
+
+	uvn_stat_pkt_drop =
+		kcalloc(num_queues, sizeof(*uvn_stat_pkt_drop), GFP_ATOMIC);
+	if (!uvn_stat_pkt_drop) {
+		ret = -ENOMEM;
+		goto alloc_uvn_stat_pkt_drop_fail;
+	}
+
+	queue_info->num_txrx_queues = num_queues;
+	queue_info->txrx_queues = txrx_queues;
+	queue_info->queues_context = queues_context;
+	queue_info->uvn_stat_pkt_drop = uvn_stat_pkt_drop;
+
+	for (i = 0; i < num_queues; i++) {
+		queue_index = find_first_zero_bit(queue_mgt->txrx_queue_bitmap,
+						  NBL_MAX_TXRX_QUEUE);
+		if (queue_index == NBL_MAX_TXRX_QUEUE) {
+			ret = -ENOSPC;
+			goto get_txrx_queue_fail;
+		}
+		txrx_queues[i] = queue_index;
+		set_bit(queue_index, queue_mgt->txrx_queue_bitmap);
+	}
+	return 0;
+
+get_txrx_queue_fail:
+	kfree(uvn_stat_pkt_drop);
+	while (i--) {
+		queue_index = txrx_queues[i];
+		clear_bit(queue_index, queue_mgt->txrx_queue_bitmap);
+	}
+	queue_info->num_txrx_queues = 0;
+	queue_info->txrx_queues = NULL;
+	queue_info->queues_context = NULL;
+	queue_info->uvn_stat_pkt_drop = NULL;
+alloc_uvn_stat_pkt_drop_fail:
+	kfree(queues_context);
+alloc_queue_context_fail:
+	kfree(txrx_queues);
+alloc_txrx_queues_fail:
+	return ret;
+}
+
+static void nbl_res_queue_remove_queue_info(struct nbl_resource_mgt *res_mgt,
+					    u16 func_id)
+{
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+	u16 i;
+
+	for (i = 0; i < queue_info->num_txrx_queues; i++)
+		clear_bit(queue_info->txrx_queues[i],
+			  queue_mgt->txrx_queue_bitmap);
+
+	kfree(queue_info->txrx_queues);
+	kfree(queue_info->queues_context);
+	kfree(queue_info->uvn_stat_pkt_drop);
+	queue_info->txrx_queues = NULL;
+	queue_info->queues_context = NULL;
+	queue_info->uvn_stat_pkt_drop = NULL;
+
+	queue_info->num_txrx_queues = 0;
+}
+
+static u64 nbl_res_queue_qid_map_key(struct nbl_qid_map_table
*map)
+{
+	return ((u64)map->notify_addr_h
+		<< NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN) |
+	       (u64)map->notify_addr_l;
+}
+
+static void nbl_res_queue_set_qid_map_table(struct nbl_resource_mgt *res_mgt,
+					    u16 tail)
+{
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	struct nbl_qid_map_param param;
+	int i;
+
+	param.qid_map = kcalloc(tail, sizeof(param.qid_map[0]), GFP_ATOMIC);
+	if (!param.qid_map)
+		return;
+
+	for (i = 0; i < tail; i++)
+		param.qid_map[i] = queue_mgt->qid_map_table[i];
+
+	param.start = 0;
+	param.len = tail;
+
+	hw_ops->set_qid_map_table(NBL_RES_MGT_TO_HW_PRIV(res_mgt), &param,
+				  queue_mgt->qid_map_select);
+	queue_mgt->qid_map_select = !queue_mgt->qid_map_select;
+
+	if (!queue_mgt->qid_map_ready) {
+		hw_ops->set_qid_map_ready(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+					  true);
+		queue_mgt->qid_map_ready = true;
+	}
+
+	kfree(param.qid_map);
+}
+
+int nbl_res_queue_setup_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt,
+					     u16 func_id, u64 notify_addr)
+{
+	struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+	struct nbl_qid_map_table qid_map;
+	u16 *txrx_queues = queue_info->txrx_queues;
+	u16 qid_map_entries = queue_info->num_txrx_queues, qid_map_base, tail;
+	u64 key, tmp;
+	int i;
+
+	/* Get base location */
+	queue_info->notify_addr = notify_addr;
+	key = notify_addr >> NBL_QID_MAP_NOTIFY_ADDR_SHIFT;
+
+	for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) {
+		tmp = nbl_res_queue_qid_map_key(&queue_mgt->qid_map_table[i]);
+		WARN_ON(key == tmp);
+		if (key < tmp) {
+			qid_map_base = i;
+			break;
+		}
+	}
+	if (i == NBL_QID_MAP_TABLE_ENTRIES) {
+		nbl_err(common, "No valid qid map key for func %d", func_id);
+		return -ENOSPC;
+	}
+
+	/* Calc tail, we will set the qid_map from 0 to tail.
+	 * We have to make sure that this range (0, tail) can cover all the
+	 * changes, which means both tables must be taken into account.
+	 * Therefore, it is necessary to store each table's tail, and always
+	 * use the larger one between this table's tail and the added tail.
+	 *
+	 * The reason can be illustrated in the following example:
+	 * Step 1: del some entries, which happens on table 1, and each table
+	 *	   could be
+	 *	   Table 0: 0 - 31 used
+	 *	   Table 1: 0 - 15 used
+	 *	   SW	  : queue_mgt->total_qid_map_entries = 16
+	 * Step 2: add 2 entries, which happens on table 0, if we use 16 + 2
+	 *	   as the tail, then
+	 *	   Table 0: 0 - 17 correctly added, 18 - 31 garbage data
+	 *	   Table 1: 0 - 15 used
+	 *	   SW	  : queue_mgt->total_qid_map_entries = 18
+	 * And this is definitely wrong, it should use 32, table 0's original
+	 * tail.
+	 */
+	queue_mgt->total_qid_map_entries += qid_map_entries;
+	tail = max(queue_mgt->total_qid_map_entries,
+		   queue_mgt->qid_map_tail[queue_mgt->qid_map_select]);
+	queue_mgt->qid_map_tail[queue_mgt->qid_map_select] =
+		queue_mgt->total_qid_map_entries;
+
+	/* Update qid map */
+	for (i = NBL_QID_MAP_TABLE_ENTRIES - qid_map_entries; i > qid_map_base;
+	     i--)
+		queue_mgt->qid_map_table[i - 1 + qid_map_entries] =
+			queue_mgt->qid_map_table[i - 1];
+
+	for (i = 0; i < queue_info->num_txrx_queues; i++) {
+		qid_map.local_qid = 2 * i + 1;
+		qid_map.notify_addr_l = key;
+		qid_map.notify_addr_h = key >>
+			NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN;
+		qid_map.global_qid = txrx_queues[i];
+		qid_map.ctrlq_flag = 0;
+		queue_mgt->qid_map_table[qid_map_base + i] = qid_map;
+	}
+
+	nbl_res_queue_set_qid_map_table(res_mgt, tail);
+
+	return 0;
+}
+
+void nbl_res_queue_remove_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt,
+					       u16 func_id)
+{
+	struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+	struct nbl_qid_map_table qid_map;
+	u64
key;
+	u16 qid_map_entries = queue_info->num_txrx_queues, qid_map_base, tail;
+	int i;
+
+	/* Get base location */
+	key = queue_info->notify_addr >> NBL_QID_MAP_NOTIFY_ADDR_SHIFT;
+
+	for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) {
+		if (key ==
+		    nbl_res_queue_qid_map_key(&queue_mgt->qid_map_table[i])) {
+			qid_map_base = i;
+			break;
+		}
+	}
+	if (i == NBL_QID_MAP_TABLE_ENTRIES) {
+		nbl_err(common, "No valid qid map key for func %d", func_id);
+		return;
+	}
+
+	/* Calc tail, we will set the qid_map from 0 to tail.
+	 * We have to make sure that this range (0, tail) can cover all the
+	 * changes, which means both tables must be taken into account.
+	 * Therefore, it is necessary to store each table's tail, and always
+	 * use the larger one between this table's tail and the driver-stored
+	 * tail.
+	 *
+	 * The reason can be illustrated in the following example:
+	 * Step 1: del some entries, which happens on table 1, and each table
+	 *	   could be
+	 *	   Table 0: 0 - 31 used
+	 *	   Table 1: 0 - 15 used
+	 *	   SW	  : queue_mgt->total_qid_map_entries = 16
+	 * Step 2: del 2 entries, which happens on table 0, if we use 16 as
+	 *	   the tail, then
+	 *	   Table 0: 0 - 13 correct, 14 - 31 garbage data
+	 *	   Table 1: 0 - 15 used
+	 *	   SW	  : queue_mgt->total_qid_map_entries = 14
+	 * And this is definitely wrong, it should use 32, table 0's original
+	 * tail.
+	 */
+	tail = max(queue_mgt->total_qid_map_entries,
+		   queue_mgt->qid_map_tail[queue_mgt->qid_map_select]);
+	queue_mgt->total_qid_map_entries -= qid_map_entries;
+	queue_mgt->qid_map_tail[queue_mgt->qid_map_select] =
+		queue_mgt->total_qid_map_entries;
+
+	/* Update qid map */
+	memset(&qid_map, U8_MAX, sizeof(qid_map));
+
+	for (i = qid_map_base; i < NBL_QID_MAP_TABLE_ENTRIES - qid_map_entries;
+	     i++)
+		queue_mgt->qid_map_table[i] =
+			queue_mgt->qid_map_table[i + qid_map_entries];
+	for (; i < NBL_QID_MAP_TABLE_ENTRIES; i++)
+		queue_mgt->qid_map_table[i] = qid_map;
+
+	nbl_res_queue_set_qid_map_table(res_mgt, tail);
+}
+
+static int
nbl_res_queue_get_rss_ret_base(struct nbl_resource_mgt *res_mgt, + u16 count, u16 rss_entry_size, + struct nbl_queue_vsi_info *vsi_info) +{ + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + u32 rss_ret_base_start; + u32 rss_ret_base_end; + u16 func_id; + u16 rss_entry_count; + u16 index, i, j, k; + int success = 1; + int ret = -EFAULT; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_info->vsi_id); + if (func_id < NBL_MAX_ETHERNET && + vsi_info->vsi_index == NBL_VSI_DATA) { + rss_ret_base_start = 0; + rss_ret_base_end = NBL_EPRO_PF_RSS_RET_TBL_DEPTH; + vsi_info->rss_entry_size = NBL_EPRO_PF_RSS_ENTRY_SIZE; + rss_entry_count = NBL_EPRO_PF_RSS_RET_TBL_COUNT; + } else { + rss_ret_base_start = NBL_EPRO_PF_RSS_RET_TBL_DEPTH; + rss_ret_base_end = NBL_EPRO_RSS_RET_TBL_DEPTH; + vsi_info->rss_entry_size = rss_entry_size; + rss_entry_count = count; + } + + for (i = rss_ret_base_start; i < rss_ret_base_end;) { + index = find_next_zero_bit(queue_mgt->rss_ret_bitmap, + rss_ret_base_end, i); + if (index == rss_ret_base_end) { + nbl_err(common, "There is no available rss ret left"); + break; + } + + success = 1; + for (j = index + 1; j < (index + rss_entry_count); j++) { + if (j >= rss_ret_base_end) { + success = 0; + break; + } + + if (test_bit(j, queue_mgt->rss_ret_bitmap)) { + success = 0; + break; + } + } + if (success) { + for (k = index; k < (index + rss_entry_count); k++) + set_bit(k, queue_mgt->rss_ret_bitmap); + vsi_info->rss_ret_base = index; + ret = 0; + break; + } + i = j; + } + + return ret; +} + +static int nbl_res_queue_setup_q2vsi(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_info *queue_info = NULL; + struct nbl_queue_vsi_info *vsi_info = NULL; + void *p = 
NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+	u16 func_id;
+	u16 qid;
+	int ret = 0, i;
+
+	func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+	queue_info = &queue_mgt->queue_info[func_id];
+	vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+	if (!vsi_info)
+		return -ENOENT;
+
+	/* config ipro queue tbl */
+	for (i = vsi_info->queue_offset;
+	     i < vsi_info->queue_offset + vsi_info->queue_num &&
+	     i < queue_info->num_txrx_queues;
+	     i++) {
+		qid = queue_info->txrx_queues[i];
+		ret = hw_ops->cfg_ipro_queue_tbl(p, qid, vsi_id, 1);
+		if (ret) {
+			while (i-- > vsi_info->queue_offset)
+				hw_ops->cfg_ipro_queue_tbl(p,
+							   queue_info->txrx_queues[i],
+							   0, 0);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void nbl_res_queue_remove_q2vsi(void *priv, u16 vsi_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	struct nbl_queue_info *queue_info = NULL;
+	struct nbl_queue_vsi_info *vsi_info = NULL;
+	u16 func_id;
+	int i;
+
+	func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+	queue_info = &queue_mgt->queue_info[func_id];
+	vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+	if (!vsi_info)
+		return;
+
+	/* config ipro queue tbl */
+	for (i = vsi_info->queue_offset;
+	     i < vsi_info->queue_offset + vsi_info->queue_num &&
+	     i < queue_info->num_txrx_queues;
+	     i++)
+		hw_ops->cfg_ipro_queue_tbl(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+					   queue_info->txrx_queues[i], 0, 0);
+}
+
+static int nbl_res_queue_setup_rss(void *priv, u16 vsi_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_queue_vsi_info *vsi_info = NULL;
+	u16 rss_entry_size, count;
+	int ret = 0;
+
+	vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+	if (!vsi_info)
+		return -ENOENT;
+
+	rss_entry_size =
+		(vsi_info->queue_num + NBL_EPRO_RSS_ENTRY_SIZE_UNIT - 1) /
+		NBL_EPRO_RSS_ENTRY_SIZE_UNIT;
+
+	rss_entry_size = ilog2(roundup_pow_of_two(rss_entry_size));
+	count =
NBL_EPRO_RSS_ENTRY_SIZE_UNIT << rss_entry_size; + + ret = nbl_res_queue_get_rss_ret_base(res_mgt, count, rss_entry_size, + vsi_info); + if (ret) + return -ENOSPC; + + vsi_info->rss_vld = true; + + return 0; +} + +static void nbl_res_queue_remove_rss(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_vsi_info *vsi_info = NULL; + u16 rss_ret_base, rss_entry_size, count; + int i; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if (!vsi_info) + return; + + if (!vsi_info->rss_vld) + return; + + rss_ret_base = vsi_info->rss_ret_base; + rss_entry_size = vsi_info->rss_entry_size; + count = NBL_EPRO_RSS_ENTRY_SIZE_UNIT << rss_entry_size; + + for (i = rss_ret_base; i < (rss_ret_base + count); i++) + clear_bit(i, queue_mgt->rss_ret_bitmap); + + vsi_info->rss_vld = false; +} + +static void +nbl_res_queue_setup_queue_cfg(struct nbl_queue_mgt *queue_mgt, + struct nbl_queue_cfg_param *cfg_param, + struct nbl_txrx_queue_param *queue_param, + bool is_tx, u16 func_id) +{ + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + + cfg_param->desc = queue_param->dma; + cfg_param->size = queue_param->desc_num; + cfg_param->global_vector = queue_param->global_vec_id; + cfg_param->global_queue_id = + queue_info->txrx_queues[queue_param->local_queue_id]; + + cfg_param->avail = queue_param->avail; + cfg_param->used = queue_param->used; + cfg_param->extend_header = queue_param->extend_header; + cfg_param->split = queue_param->split; + cfg_param->last_avail_idx = queue_param->cxt; + + cfg_param->intr_en = queue_param->intr_en; + cfg_param->intr_mask = queue_param->intr_mask; + + cfg_param->tx = is_tx; + cfg_param->rxcsum = queue_param->rxcsum; + cfg_param->half_offload_en = queue_param->half_offload_en; +} + +static void nbl_res_queue_update_netid_refnum(struct nbl_queue_mgt *queue_mgt, + u16 net_id, bool add) +{ + if (net_id >= 
NBL_MAX_NET_ID)
+		return;
+
+	if (add) {
+		queue_mgt->net_id_ref_vsinum[net_id]++;
+	} else {
+		/* probe calls clear_queue first, so check for non-zero to
+		 * allow disabling dsch more than once
+		 */
+		if (queue_mgt->net_id_ref_vsinum[net_id])
+			queue_mgt->net_id_ref_vsinum[net_id]--;
+	}
+}
+
+static u16 nbl_res_queue_get_netid_refnum(struct nbl_queue_mgt *queue_mgt,
+					  u16 net_id)
+{
+	if (net_id >= NBL_MAX_NET_ID)
+		return 0;
+
+	return queue_mgt->net_id_ref_vsinum[net_id];
+}
+
+static void nbl_res_queue_setup_hw_dq(struct nbl_resource_mgt *res_mgt,
+				      struct nbl_queue_cfg_param *queue_cfg,
+				      u16 func_id, u16 vsi_id)
+{
+	struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt);
+	struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id];
+	struct nbl_queue_vsi_info *vsi_info;
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	struct nbl_vnet_queue_info_param param = { 0 };
+	void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt);
+	u16 global_qid = queue_cfg->global_queue_id;
+	u8 bus, dev, func;
+
+	vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id);
+	if (!vsi_info)
+		return;
+
+	nbl_res_func_id_to_bdf(res_mgt, func_id, &bus, &dev, &func);
+	queue_info->split = queue_cfg->split;
+	queue_info->queue_size = queue_cfg->size;
+
+	param.function_id = func;
+	param.device_id = dev;
+	param.bus_id = bus;
+	param.valid = 1;
+
+	if (queue_cfg->intr_en) {
+		param.msix_idx = queue_cfg->global_vector;
+		param.msix_idx_valid = 1;
+	}
+
+	if (queue_cfg->tx) {
+		hw_ops->set_vnet_queue_info(p, &param,
+					    NBL_PAIR_ID_GET_TX(global_qid));
+		hw_ops->reset_dvn_cfg(p, global_qid);
+		if (!queue_cfg->extend_header)
+			hw_ops->restore_dvn_context(p, global_qid,
+						    queue_cfg->split,
+						    queue_cfg->last_avail_idx);
+		hw_ops->cfg_tx_queue(p, queue_cfg, global_qid);
+		if (nbl_res_queue_get_netid_refnum(queue_mgt, vsi_info->net_id))
+			hw_ops->cfg_q2tc_netid(p, global_qid,
+					       vsi_info->net_id, 1);
+
+	} else {
+		hw_ops->set_vnet_queue_info(p, &param,
+					    NBL_PAIR_ID_GET_RX(global_qid));
+ hw_ops->reset_uvn_cfg(p, global_qid); + nbl_res_queue_reset_uvn_pkt_drop_stats(res_mgt, func_id, + global_qid); + if (!queue_cfg->extend_header) + hw_ops->restore_uvn_context(p, global_qid, + queue_cfg->split, + queue_cfg->last_avail_idx); + hw_ops->cfg_rx_queue(p, queue_cfg, global_qid); + } +} + +static void nbl_res_queue_remove_all_hw_dq(struct nbl_resource_mgt *res_mgt, + u16 func_id, + struct nbl_queue_vsi_info *vsi_info) +{ + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u16 start = vsi_info->queue_offset, + end = vsi_info->queue_offset + vsi_info->queue_num; + u16 global_queue; + int i; + + for (i = start; i < end; i++) { + global_queue = queue_info->txrx_queues[i]; + + hw_ops->lso_dsch_drain(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + hw_ops->disable_dvn(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + } + + for (i = start; i < end; i++) { + global_queue = queue_info->txrx_queues[i]; + + hw_ops->disable_uvn(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + hw_ops->rsc_cache_drain(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + } + + for (i = start; i < end; i++) { + global_queue = queue_info->txrx_queues[i]; + queue_info->queues_context[NBL_PAIR_ID_GET_RX(i)] = + hw_ops->save_uvn_ctx(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue, queue_info->split, + queue_info->queue_size); + queue_info->queues_context[NBL_PAIR_ID_GET_TX(i)] = + hw_ops->save_dvn_ctx(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue, queue_info->split); + } + + for (i = start; i < end; i++) { + global_queue = queue_info->txrx_queues[i]; + hw_ops->reset_uvn_cfg(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + nbl_res_queue_reset_uvn_pkt_drop_stats(res_mgt, func_id, + global_queue); + hw_ops->reset_dvn_cfg(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + } + + for (i = start; i < end; i++) { + global_queue = 
queue_info->txrx_queues[i]; + hw_ops->clear_vnet_queue_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + NBL_PAIR_ID_GET_RX(global_queue)); + hw_ops->clear_vnet_queue_info(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + NBL_PAIR_ID_GET_TX(global_queue)); + } +} + +int nbl_res_queue_init_qid_map_table(struct nbl_resource_mgt *res_mgt, + struct nbl_queue_mgt *queue_mgt, + struct nbl_hw_ops *hw_ops) +{ + struct nbl_qid_map_table invalid_qid_map; + u16 i; + + queue_mgt->qid_map_ready = 0; + queue_mgt->qid_map_select = NBL_MASTER_QID_MAP_TABLE; + + memset(&invalid_qid_map, 0, sizeof(invalid_qid_map)); + invalid_qid_map.local_qid = 0x1FF; + invalid_qid_map.notify_addr_l = 0x7FFFFF; + invalid_qid_map.notify_addr_h = 0xFFFFFFFF; + invalid_qid_map.global_qid = 0xFFF; + invalid_qid_map.ctrlq_flag = 0X1; + + for (i = 0; i < NBL_QID_MAP_TABLE_ENTRIES; i++) + queue_mgt->qid_map_table[i] = invalid_qid_map; + + hw_ops->init_qid_map_table(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + + return 0; +} + +static int nbl_res_queue_init_epro_rss_key(struct nbl_resource_mgt *res_mgt, + struct nbl_hw_ops *hw_ops) +{ + int ret = 0; + + ret = hw_ops->init_epro_rss_key(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + return ret; +} + +static int nbl_res_queue_init_epro_vpt_table(struct nbl_resource_mgt *res_mgt, + u16 func_id) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_sriov_info *sriov_info = + &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id]; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + int pfid, vfid; + u16 vsi_id, vf_vsi_id; + u16 i; + + vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_PF_DATA_TYPE); + nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid); + + if (sriov_info->bdf != 0) { + /* init pf vsi */ + for (i = NBL_VSI_SERV_PF_DATA_TYPE; + i <= NBL_VSI_SERV_PF_USER_TYPE; i++) { + vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, i); + hw_ops->init_epro_vpt_tbl(p, vsi_id); + } + + for (vfid = 0; vfid < sriov_info->num_vfs; vfid++) { + vf_vsi_id = 
nbl_res_pfvfid_to_vsi_id(res_mgt, pfid, + vfid, + NBL_VSI_DATA); + if (vf_vsi_id == 0xFFFF) + continue; + + hw_ops->init_epro_vpt_tbl(p, vf_vsi_id); + } + } + + return 0; +} + +static int +nbl_res_queue_init_ipro_dn_sport_tbl(struct nbl_resource_mgt *res_mgt, + u16 func_id, u16 bmode, bool binit) + +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_sriov_info *sriov_info = + &NBL_RES_MGT_TO_SRIOV_INFO(res_mgt)[func_id]; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + int pfid, vfid; + u16 eth_id, vsi_id, vf_vsi_id; + int i; + + vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_PF_DATA_TYPE); + nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pfid, &vfid); + + if (sriov_info->bdf != 0) { + eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id); + + for (i = 0; i < NBL_VSI_MAX; i++) + hw_ops->cfg_ipro_dn_sport_tbl(p, vsi_id + i, eth_id, + bmode, binit); + + for (vfid = 0; vfid < sriov_info->num_vfs; vfid++) { + vf_vsi_id = nbl_res_pfvfid_to_vsi_id(res_mgt, pfid, + vfid, + NBL_VSI_DATA); + if (vf_vsi_id == 0xFFFF) + continue; + + hw_ops->cfg_ipro_dn_sport_tbl(p, vf_vsi_id, eth_id, + bmode, binit); + } + } + + return 0; +} + +static int nbl_res_queue_init_rss(struct nbl_resource_mgt *res_mgt, + struct nbl_queue_mgt *queue_mgt, + struct nbl_hw_ops *hw_ops) +{ + return nbl_res_queue_init_epro_rss_key(res_mgt, hw_ops); +} + +static int nbl_res_queue_alloc_txrx_queues(void *priv, u16 vsi_id, + u16 queue_num) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u64 notify_addr; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + int ret = 0; + + notify_addr = nbl_res_get_func_bar_base_addr(res_mgt, func_id); + + ret = nbl_res_queue_setup_queue_info(res_mgt, func_id, queue_num); + if (ret) + goto setup_queue_info_fail; + + ret = nbl_res_queue_setup_qid_map_table_leonis(res_mgt, func_id, + notify_addr); + if (ret) + goto setup_qid_map_fail; + + return 0; + +setup_qid_map_fail: + 
nbl_res_queue_remove_queue_info(res_mgt, func_id); +setup_queue_info_fail: + return ret; +} + +static void nbl_res_queue_free_txrx_queues(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + + nbl_res_queue_remove_qid_map_table_leonis(res_mgt, func_id); + nbl_res_queue_remove_queue_info(res_mgt, func_id); +} + +static int nbl_res_queue_setup_queue(void *priv, + struct nbl_txrx_queue_param *param, + bool is_tx) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_cfg_param cfg_param = { 0 }; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, param->vsi_id); + + nbl_res_queue_setup_queue_cfg(NBL_RES_MGT_TO_QUEUE_MGT(res_mgt), + &cfg_param, param, is_tx, func_id); + nbl_res_queue_setup_hw_dq(res_mgt, &cfg_param, func_id, param->vsi_id); + return 0; +} + +static void nbl_res_queue_remove_all_queues(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + struct nbl_queue_vsi_info *vsi_info = NULL; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if (!vsi_info) + return; + + nbl_res_queue_remove_all_hw_dq(res_mgt, func_id, vsi_info); +} + +static int nbl_res_queue_register_vsi2q(void *priv, u16 vsi_index, u16 vsi_id, + u16 queue_offset, u16 queue_num) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = NULL; + struct nbl_queue_vsi_info *vsi_info = NULL; + u16 func_id; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + queue_info = &queue_mgt->queue_info[func_id]; + vsi_info = &queue_info->vsi_info[vsi_index]; + + memset(vsi_info, 0, sizeof(*vsi_info)); + vsi_info->vld = 1; + vsi_info->vsi_index = vsi_index; + vsi_info->vsi_id = vsi_id; + vsi_info->queue_offset = queue_offset; + 
vsi_info->queue_num = queue_num; + vsi_info->net_id = + nbl_res_queue_get_net_id(func_id, vsi_info->vsi_index); + + return 0; +} + +static int nbl_res_queue_cfg_dsch(void *priv, u16 vsi_id, bool vld) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_vsi_info *vsi_info; + /* group_id is same with eth_id */ + u16 group_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id); + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u16 start = 0, end = 0; + int i, ret = 0; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if (!vsi_info) + return -ENOENT; + + start = vsi_info->queue_offset; + end = vsi_info->queue_num + vsi_info->queue_offset; + + /* When setting up, g2p -> n2g -> q2tc; when down, q2tc -> n2g -> g2p */ + if (!vld) { + hw_ops->deactive_shaping(p, + vsi_info->net_id); + for (i = start; i < end; i++) + hw_ops->cfg_q2tc_netid(p, + queue_info->txrx_queues[i], + vsi_info->net_id, vld); + nbl_res_queue_update_netid_refnum(queue_mgt, vsi_info->net_id, + false); + } + + if (!nbl_res_queue_get_netid_refnum(queue_mgt, vsi_info->net_id)) { + ret = hw_ops->cfg_dsch_net_to_group(p, vsi_info->net_id, + group_id, vld); + if (ret) + return ret; + } + + if (vld) { + for (i = start; i < end; i++) + hw_ops->cfg_q2tc_netid(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + queue_info->txrx_queues[i], + vsi_info->net_id, vld); + hw_ops->active_shaping(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + vsi_info->net_id); + nbl_res_queue_update_netid_refnum(queue_mgt, vsi_info->net_id, + true); + } + + return 0; +} + +static int nbl_res_queue_setup_cqs(void *priv, u16 vsi_id, u16 real_qps, + bool rss_indir_set) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = 
NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info; + struct nbl_queue_vsi_info *vsi_info = NULL; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + void *q_list; + u16 func_id; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + queue_info = &queue_mgt->queue_info[func_id]; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if (!vsi_info) + return -ENOENT; + + if (real_qps == vsi_info->curr_qps) + return 0; + + if (real_qps && rss_indir_set) { + q_list = queue_info->txrx_queues + vsi_info->queue_offset; + hw_ops->cfg_epro_rss_ret(p, vsi_info->rss_ret_base, + vsi_info->rss_entry_size, real_qps, + q_list, NULL); + } + + if (!vsi_info->curr_qps) + hw_ops->set_epro_rss_pt(p, vsi_id, vsi_info->rss_ret_base, + vsi_info->rss_entry_size); + + vsi_info->curr_qps = real_qps; + vsi_info->curr_qps_static = real_qps; + return 0; +} + +static void nbl_res_queue_remove_cqs(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_vsi_info *vsi_info = NULL; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if (!vsi_info) + return; + + hw_ops->clear_epro_rss_pt(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id); + + vsi_info->curr_qps = 0; +} + +static int nbl_res_queue_init_switch(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + int i; + + for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET) + hw_ops->setup_queue_switch(NBL_RES_MGT_TO_HW_PRIV(res_mgt), i); + + return 0; +} + +static int nbl_res_queue_init(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt; + struct nbl_hw_ops *hw_ops; + int i, ret = 0; + + if (!res_mgt) + return -EINVAL; + + queue_mgt = 
NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + ret = nbl_res_queue_init_qid_map_table(res_mgt, queue_mgt, hw_ops); + if (ret) + goto init_queue_fail; + + ret = nbl_res_queue_init_rss(res_mgt, queue_mgt, hw_ops); + if (ret) + goto init_queue_fail; + + ret = nbl_res_queue_init_switch(res_mgt); + if (ret) + goto init_queue_fail; + + for (i = 0; i < NBL_RES_MGT_TO_PF_NUM(res_mgt); i++) { + nbl_res_queue_init_epro_vpt_table(res_mgt, i); + nbl_res_queue_init_ipro_dn_sport_tbl(res_mgt, i, + BRIDGE_MODE_VEB, true); + } + hw_ops->init_pfc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), NBL_MAX_ETHERNET); + + return 0; + +init_queue_fail: + return ret; +} + +static u16 nbl_res_queue_get_local_queue_id(void *priv, u16 vsi_id, + u16 global_queue_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + int i; + + queue_info = &queue_mgt->queue_info[func_id]; + + if (queue_info->txrx_queues) + for (i = 0; i < queue_info->num_txrx_queues; i++) + if (global_queue_id == queue_info->txrx_queues[i]) + return i; + + return U16_MAX; +} + +static u16 nbl_res_queue_get_vsi_global_qid(void *priv, u16 vsi_id, + u16 local_qid) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + + if (!queue_info->num_txrx_queues) + return 0xffff; + + return queue_info->txrx_queues[local_qid]; +} + +static void nbl_res_queue_get_rxfh_indir_size(void *priv, u16 vsi_id, + u32 *rxfh_indir_size) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_vsi_info *vsi_info = NULL; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if 
(!vsi_info) + return; + + *rxfh_indir_size = NBL_EPRO_RSS_ENTRY_SIZE_UNIT + << vsi_info->rss_entry_size; +} + +static int nbl_res_queue_set_rxfh_indir(void *priv, u16 vsi_id, + const u32 *indir, u32 indir_size) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_vsi_info *vsi_info = NULL; + u32 *rss_ret; + u16 local_id; + int i = 0; + + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + if (!vsi_info) + return -ENOENT; + + if (indir) { + rss_ret = kcalloc(indir_size, sizeof(indir[0]), GFP_KERNEL); + if (!rss_ret) + return -ENOMEM; + /* local queue to global queue */ + for (i = 0; i < indir_size; i++) { + local_id = vsi_info->queue_offset + indir[i]; + rss_ret[i] = + nbl_res_queue_get_vsi_global_qid(res_mgt, + vsi_id, + local_id); + } + hw_ops->cfg_epro_rss_ret(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + vsi_info->rss_ret_base, + vsi_info->rss_entry_size, 0, NULL, + rss_ret); + kfree(rss_ret); + } + + if (!vsi_info->curr_qps) + hw_ops->set_epro_rss_pt(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id, + vsi_info->rss_ret_base, + vsi_info->rss_entry_size); + + return 0; +} + +static void nbl_res_queue_clear_queues(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + + nbl_res_queue_remove_rss(priv, vsi_id); + nbl_res_queue_remove_q2vsi(priv, vsi_id); + if (!queue_info->num_txrx_queues) + return; + nbl_res_queue_remove_cqs(res_mgt, vsi_id); + nbl_res_queue_cfg_dsch(res_mgt, vsi_id, false); + nbl_res_queue_remove_all_queues(res_mgt, vsi_id); + nbl_res_queue_free_txrx_queues(res_mgt, vsi_id); +} + +static u16 nbl_get_adapt_desc_gother_level(u16 last_level, u64 rates) +{ + switch (last_level) { + case NBL_ADAPT_DESC_GOTHER_L0: + if (rates 
> NBL_ADAPT_DESC_GOTHER_L1_TH) + return NBL_ADAPT_DESC_GOTHER_L1; + else + return NBL_ADAPT_DESC_GOTHER_L0; + case NBL_ADAPT_DESC_GOTHER_L1: + if (rates > NBL_ADAPT_DESC_GOTHER_L1_DOWNGRADE_TH) + return NBL_ADAPT_DESC_GOTHER_L1; + else + return NBL_ADAPT_DESC_GOTHER_L0; + default: + return NBL_ADAPT_DESC_GOTHER_L0; + } +} + +static u16 nbl_get_adapt_desc_gother_timeout(u16 level) +{ + switch (level) { + case NBL_ADAPT_DESC_GOTHER_L0: + return NBL_ADAPT_DESC_GOTHER_L0_TO; + case NBL_ADAPT_DESC_GOTHER_L1: + return NBL_ADAPT_DESC_GOTHER_L1_TO; + default: + return NBL_ADAPT_DESC_GOTHER_L0_TO; + } +} + +static void nbl_res_queue_adapt_desc_gother(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_adapt_desc_gother *adapt_desc_gother = + &queue_mgt->adapt_desc_gother; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u32 last_uvn_desc_rd_entry = adapt_desc_gother->uvn_desc_rd_entry; + u64 last_get_stats_jiffies = adapt_desc_gother->get_desc_stats_jiffies; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u64 time_diff; + u32 uvn_desc_rd_entry; + u32 rx_rate; + u16 level, last_level, timeout; + + last_level = adapt_desc_gother->level; + time_diff = jiffies - last_get_stats_jiffies; + uvn_desc_rd_entry = hw_ops->get_uvn_desc_entry_stats(p); + rx_rate = (uvn_desc_rd_entry - last_uvn_desc_rd_entry) / time_diff * HZ; + adapt_desc_gother->get_desc_stats_jiffies = jiffies; + adapt_desc_gother->uvn_desc_rd_entry = uvn_desc_rd_entry; + + level = nbl_get_adapt_desc_gother_level(last_level, rx_rate); + if (level != last_level) { + timeout = nbl_get_adapt_desc_gother_timeout(level); + hw_ops->set_uvn_desc_wr_timeout(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + timeout); + adapt_desc_gother->level = level; + } +} + +static void nbl_res_flr_clear_queues(void *priv, u16 vf_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = 
vf_id + NBL_MAX_PF; + u16 vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_VF_DATA_TYPE); + + if (nbl_res_vf_is_active(priv, func_id)) + nbl_res_queue_clear_queues(priv, vsi_id); +} + +static int nbl_res_queue_stop_abnormal_hw_queue(void *priv, u16 vsi_id, + u16 local_queue_id, int type) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_info *queue_info; + u16 global_queue, func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id); + + queue_info = &queue_mgt->queue_info[func_id]; + global_queue = queue_info->txrx_queues[local_queue_id]; + switch (type) { + case NBL_TX: + hw_ops->lso_dsch_drain(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + hw_ops->disable_dvn(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + + hw_ops->reset_dvn_cfg(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + return 0; + case NBL_RX: + hw_ops->disable_uvn(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + hw_ops->rsc_cache_drain(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + + hw_ops->reset_uvn_cfg(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + global_queue); + nbl_res_queue_reset_uvn_pkt_drop_stats(res_mgt, func_id, + global_queue); + return 0; + default: + break; + } + + return -EINVAL; +} + +static int nbl_res_queue_set_tx_rate(void *priv, u16 func_id, int tx_rate, + int burst) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + struct nbl_queue_vsi_info *vsi_info = NULL; + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u16 vsi_id, queue_id; + bool is_active = false; + int max_rate = 0, i; + + vsi_id = 
nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_VF_DATA_TYPE); + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + + if (!vsi_info) + return 0; + + switch (res_info->board_info.eth_speed) { + case NBL_FW_PORT_SPEED_100G: + max_rate = NBL_RATE_MBPS_100G; + break; + case NBL_FW_PORT_SPEED_25G: + max_rate = NBL_RATE_MBPS_25G; + break; + case NBL_FW_PORT_SPEED_10G: + max_rate = NBL_RATE_MBPS_10G; + break; + default: + return -EOPNOTSUPP; + } + + if (tx_rate > max_rate) + return -EINVAL; + + if (queue_info->txrx_queues) + for (i = 0; i < vsi_info->curr_qps; i++) { + queue_id = + queue_info->txrx_queues[i + + vsi_info->queue_offset]; + is_active |= hw_ops->check_q2tc(p, queue_id); + } + + /* Config shaping */ + return hw_ops->set_shaping(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + vsi_info->net_id, tx_rate, burst, + !!(tx_rate), is_active); +} + +static int nbl_res_queue_set_rx_rate(void *priv, u16 func_id, int rx_rate, + int burst) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_resource_info *res_info = NBL_RES_MGT_TO_RES_INFO(res_mgt); + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_queue_vsi_info *vsi_info = NULL; + u16 vsi_id; + int max_rate = 0; + + vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, NBL_VSI_DATA); + vsi_info = nbl_res_queue_get_vsi_info(res_mgt, vsi_id); + + if (!vsi_info) + return 0; + + switch (res_info->board_info.eth_speed) { + case NBL_FW_PORT_SPEED_100G: + max_rate = NBL_RATE_MBPS_100G; + break; + case NBL_FW_PORT_SPEED_25G: + max_rate = NBL_RATE_MBPS_25G; + break; + case NBL_FW_PORT_SPEED_10G: + max_rate = NBL_RATE_MBPS_10G; + break; + default: + return -EOPNOTSUPP; + } + + if (rx_rate > max_rate) + return -EINVAL; + + /* Config ucar */ + return hw_ops->set_ucar(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id, + rx_rate, burst, !!(rx_rate)); +} + +static void nbl_res_queue_get_active_func_bitmaps(void *priv, + unsigned long *bitmap, + int max_func) +{ + int i; + int 
func_id_end; + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + + func_id_end = max_func > NBL_MAX_FUNC ? NBL_MAX_FUNC : max_func; + for (i = 0; i < func_id_end; i++) { + if (!nbl_res_check_func_active_by_queue(res_mgt, i)) + continue; + + set_bit(i, bitmap); + } +} + +static int nbl_res_queue_reset_uvn_pkt_drop_stats(void *priv, u16 func_id, + u16 global_queue_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + struct nbl_queue_info *queue_info = &queue_mgt->queue_info[func_id]; + u16 vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_PF_DATA_TYPE); + u16 local_queue_id; + + local_queue_id = nbl_res_queue_get_local_queue_id(res_mgt, vsi_id, + global_queue_id); + queue_info->uvn_stat_pkt_drop[local_queue_id] = 0; + return 0; +} + +/* NBL_QUEUE_SET_OPS(ops_name, func) + * + * Use X Macros to reduce setup and remove codes. + */ +#define NBL_QUEUE_OPS_TBL \ +do { \ + NBL_QUEUE_SET_OPS(alloc_txrx_queues, \ + nbl_res_queue_alloc_txrx_queues); \ + NBL_QUEUE_SET_OPS(free_txrx_queues, \ + nbl_res_queue_free_txrx_queues); \ + NBL_QUEUE_SET_OPS(register_vsi2q, nbl_res_queue_register_vsi2q);\ + NBL_QUEUE_SET_OPS(setup_q2vsi, nbl_res_queue_setup_q2vsi); \ + NBL_QUEUE_SET_OPS(remove_q2vsi, nbl_res_queue_remove_q2vsi); \ + NBL_QUEUE_SET_OPS(setup_rss, nbl_res_queue_setup_rss); \ + NBL_QUEUE_SET_OPS(remove_rss, nbl_res_queue_remove_rss); \ + NBL_QUEUE_SET_OPS(setup_queue, nbl_res_queue_setup_queue); \ + NBL_QUEUE_SET_OPS(remove_all_queues, nbl_res_queue_remove_all_queues);\ + NBL_QUEUE_SET_OPS(cfg_dsch, nbl_res_queue_cfg_dsch); \ + NBL_QUEUE_SET_OPS(setup_cqs, nbl_res_queue_setup_cqs); \ + NBL_QUEUE_SET_OPS(remove_cqs, nbl_res_queue_remove_cqs); \ + NBL_QUEUE_SET_OPS(queue_init, nbl_res_queue_init); \ + NBL_QUEUE_SET_OPS(get_rxfh_indir_size, \ + nbl_res_queue_get_rxfh_indir_size); \ + NBL_QUEUE_SET_OPS(set_rxfh_indir, 
nbl_res_queue_set_rxfh_indir);\
+	NBL_QUEUE_SET_OPS(clear_queues, nbl_res_queue_clear_queues);	\
+	NBL_QUEUE_SET_OPS(get_vsi_global_queue_id,			\
+			  nbl_res_queue_get_vsi_global_qid);		\
+	NBL_QUEUE_SET_OPS(adapt_desc_gother,				\
+			  nbl_res_queue_adapt_desc_gother);		\
+	NBL_QUEUE_SET_OPS(flr_clear_queues, nbl_res_flr_clear_queues);	\
+	NBL_QUEUE_SET_OPS(get_local_queue_id,				\
+			  nbl_res_queue_get_local_queue_id);		\
+	NBL_QUEUE_SET_OPS(set_tx_rate, nbl_res_queue_set_tx_rate);	\
+	NBL_QUEUE_SET_OPS(set_rx_rate, nbl_res_queue_set_rx_rate);	\
+	NBL_QUEUE_SET_OPS(stop_abnormal_hw_queue,			\
+			  nbl_res_queue_stop_abnormal_hw_queue);	\
+	NBL_QUEUE_SET_OPS(get_active_func_bitmaps,			\
+			  nbl_res_queue_get_active_func_bitmaps);	\
+} while (0)
+
+int nbl_queue_setup_ops_leonis(struct nbl_resource_ops *res_ops)
+{
+#define NBL_QUEUE_SET_OPS(name, func) \
+	do { \
+		res_ops->NBL_NAME(name) = func; \
+	} while (0)
+	NBL_QUEUE_OPS_TBL;
+#undef NBL_QUEUE_SET_OPS
+
+	return 0;
+}
+
+void nbl_queue_remove_ops_leonis(struct nbl_resource_ops *res_ops)
+{
+#define NBL_QUEUE_SET_OPS(name, func) \
+do { \
+	(void)(func); \
+	res_ops->NBL_NAME(name) = NULL; \
+} while (0)
+	NBL_QUEUE_OPS_TBL;
+#undef NBL_QUEUE_SET_OPS
+}
+
+void nbl_queue_mgt_init_leonis(struct nbl_queue_mgt *queue_mgt)
+{
+	queue_mgt->qid_map_select = NBL_MASTER_QID_MAP_TABLE;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
new file mode 100644
index 000000000000..8af3f803b89a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_queue_leonis.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
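The NBL_QUEUE_OPS_TBL / NBL_QUEUE_SET_OPS pair above is the classic X-macro idiom: the table lists each (name, handler) pair exactly once, and the setup and remove paths each redefine the per-entry macro to assign or clear the corresponding function pointer. A minimal userspace sketch of the same idiom (the `demo_*` struct and handler names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical ops struct standing in for nbl_resource_ops. */
struct demo_ops {
	int (*setup_queue)(int id);
	int (*remove_queue)(int id);
};

static int demo_setup_queue(int id)  { return id; }
static int demo_remove_queue(int id) { return -id; }

/* Each op appears exactly once; setup and remove reuse the same table. */
#define DEMO_OPS_TBL					\
do {							\
	DEMO_SET_OPS(setup_queue, demo_setup_queue);	\
	DEMO_SET_OPS(remove_queue, demo_remove_queue);	\
} while (0)

static void demo_setup_ops(struct demo_ops *ops)
{
#define DEMO_SET_OPS(name, func) do { ops->name = func; } while (0)
	DEMO_OPS_TBL;
#undef DEMO_SET_OPS
}

static void demo_remove_ops(struct demo_ops *ops)
{
#define DEMO_SET_OPS(name, func) do { ops->name = NULL; } while (0)
	DEMO_OPS_TBL;
#undef DEMO_SET_OPS
}
```

Adding a new op then means touching only the table, so setup and remove can never drift out of sync.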
+ * Author: + */ + +#ifndef _NBL_QUEUE_LEONIS_H_ +#define _NBL_QUEUE_LEONIS_H_ + +#include "nbl_resource.h" + +#define NBL_QID_MAP_NOTIFY_ADDR_SHIFT (9) +#define NBL_QID_MAP_NOTIFY_ADDR_LOW_PART_LEN (23) + +#define NBL_ADAPT_DESC_GOTHER_L1_TH (1000000) /* 1000k */ +#define NBL_ADAPT_DESC_GOTHER_L1_DOWNGRADE_TH (700000) /* 700k */ +#define NBL_ADAPT_DESC_GOTHER_L0 (0) +#define NBL_ADAPT_DESC_GOTHER_L1 (1) + +#define NBL_ADAPT_DESC_GOTHER_L0_TO (0x12c) +#define NBL_ADAPT_DESC_GOTHER_L1_TO (0x960) + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c index b4c6de135a26..161ba88a61c0 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c @@ -488,6 +488,10 @@ static struct nbl_resource_ops res_ops = { }; static struct nbl_res_product_ops product_ops = { + .queue_mgt_init = nbl_queue_mgt_init_leonis, + .setup_qid_map_table = nbl_res_queue_setup_qid_map_table_leonis, + .remove_qid_map_table = nbl_res_queue_remove_qid_map_table_leonis, + .init_qid_map_table = nbl_res_queue_init_qid_map_table, }; static bool is_ops_inited; @@ -546,7 +550,18 @@ static int nbl_res_setup_ops(struct device *dev, return -ENOMEM; if (!is_ops_inited) { + ret = nbl_queue_setup_ops_leonis(&res_ops); + if (ret) + goto setup_fail; ret = nbl_intr_setup_ops(&res_ops); + if (ret) + goto setup_fail; + + ret = nbl_vsi_setup_ops(&res_ops); + if (ret) + goto setup_fail; + + ret = nbl_adminq_setup_ops(&res_ops); if (ret) goto setup_fail; is_ops_inited = true; @@ -865,7 +880,10 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis) { struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt; + nbl_queue_mgt_stop(res_mgt); nbl_intr_mgt_stop(res_mgt); + nbl_adminq_mgt_stop(res_mgt); + nbl_vsi_mgt_stop(res_mgt); 
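The adaptive descriptor-gathering path earlier (nbl_get_adapt_desc_gother_level()) implements a two-threshold hysteresis: the gather level is raised only above roughly 1,000,000 descriptors/s but lowered again only below 700,000, so a rate that oscillates between the two thresholds does not make the UVN write timeout flap. A standalone sketch of that decision, reusing the threshold constants from nbl_queue_leonis.h:

```c
#include <assert.h>

#define NBL_ADAPT_DESC_GOTHER_L0		0
#define NBL_ADAPT_DESC_GOTHER_L1		1
#define NBL_ADAPT_DESC_GOTHER_L1_TH		1000000ULL /* upgrade above 1000k */
#define NBL_ADAPT_DESC_GOTHER_L1_DOWNGRADE_TH	700000ULL  /* downgrade below 700k */

/* Same hysteresis as the driver: the upgrade and downgrade thresholds differ,
 * so small oscillations around one threshold do not toggle the level. */
static unsigned short adapt_level(unsigned short last_level,
				  unsigned long long rate)
{
	switch (last_level) {
	case NBL_ADAPT_DESC_GOTHER_L0:
		return rate > NBL_ADAPT_DESC_GOTHER_L1_TH ?
		       NBL_ADAPT_DESC_GOTHER_L1 : NBL_ADAPT_DESC_GOTHER_L0;
	case NBL_ADAPT_DESC_GOTHER_L1:
		return rate > NBL_ADAPT_DESC_GOTHER_L1_DOWNGRADE_TH ?
		       NBL_ADAPT_DESC_GOTHER_L1 : NBL_ADAPT_DESC_GOTHER_L0;
	default:
		return NBL_ADAPT_DESC_GOTHER_L0;
	}
}
```

Only a level change triggers set_uvn_desc_wr_timeout(), so the hardware timeout register is rewritten rarely.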
nbl_res_ctrl_dev_ustore_stats_remove(res_mgt); nbl_res_ctrl_dev_remove_vsi_info(res_mgt); nbl_res_ctrl_dev_remove_eth_info(res_mgt); @@ -918,6 +936,18 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis, if (ret) goto start_fail; + ret = nbl_queue_mgt_start(res_mgt); + if (ret) + goto start_fail; + + ret = nbl_vsi_mgt_start(res_mgt); + if (ret) + goto start_fail; + + ret = nbl_adminq_mgt_start(res_mgt); + if (ret) + goto start_fail; + ret = nbl_intr_mgt_start(res_mgt); if (ret) goto start_fail; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h index a0a25a2b71ee..3763c33db00f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h @@ -10,4 +10,16 @@ #include "nbl_resource.h" #define NBL_MAX_PF_LEONIS 8 + +int nbl_queue_setup_ops_leonis(struct nbl_resource_ops *resource_ops); +void nbl_queue_remove_ops_leonis(struct nbl_resource_ops *resource_ops); + +void nbl_queue_mgt_init_leonis(struct nbl_queue_mgt *queue_mgt); +int nbl_res_queue_setup_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt, + u16 func_id, u64 notify_addr); +void nbl_res_queue_remove_qid_map_table_leonis(struct nbl_resource_mgt *res_mgt, + u16 func_id); +int nbl_res_queue_init_qid_map_table(struct nbl_resource_mgt *res_mgt, + struct nbl_queue_mgt *queue_mgt, + struct nbl_hw_ops *hw_ops); #endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c new file mode 100644 index 000000000000..35c2e34b30b6 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#include "nbl_queue.h" + +/* Structure starts here, adding an op should not modify anything below */ +static int nbl_queue_setup_mgt(struct device *dev, + struct nbl_queue_mgt **queue_mgt) +{ + *queue_mgt = + devm_kzalloc(dev, sizeof(struct nbl_queue_mgt), GFP_KERNEL); + if (!*queue_mgt) + return -ENOMEM; + + return 0; +} + +static void nbl_queue_remove_mgt(struct device *dev, + struct nbl_queue_mgt **queue_mgt) +{ + devm_kfree(dev, *queue_mgt); + *queue_mgt = NULL; +} + +int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt) +{ + struct device *dev; + struct nbl_queue_mgt **queue_mgt; + struct nbl_res_product_ops *product_ops = + NBL_RES_MGT_TO_PROD_OPS(res_mgt); + int ret = 0; + + dev = NBL_RES_MGT_TO_DEV(res_mgt); + queue_mgt = &NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + + ret = nbl_queue_setup_mgt(dev, queue_mgt); + if (ret) + return ret; + + NBL_OPS_CALL(product_ops->queue_mgt_init, (*queue_mgt)); + + return 0; +} + +void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt) +{ + struct device *dev; + struct nbl_queue_mgt **queue_mgt; + + dev = NBL_RES_MGT_TO_DEV(res_mgt); + queue_mgt = &NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + + if (!(*queue_mgt)) + return; + + nbl_queue_remove_mgt(dev, queue_mgt); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h new file mode 100644 index 000000000000..94a5b27f1bcb --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_queue.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_QUEUE_H_ +#define _NBL_QUEUE_H_ + +#include "nbl_resource.h" +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c index 22205e055100..e1f67ede651a 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.c @@ -285,6 +285,14 @@ static u8 eth_id_to_pf_id(void *p, u8 eth_id) return pf_id_offset + NBL_COMMON_TO_MGT_PF(common); } +static bool check_func_active_by_queue(void *p, u16 func_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)p; + struct nbl_queue_mgt *queue_mgt = NBL_RES_MGT_TO_QUEUE_MGT(res_mgt); + + return queue_mgt->queue_info[func_id].txrx_queues ? true : false; +} + int nbl_res_func_id_to_pfvfid(struct nbl_resource_mgt *res_mgt, u16 func_id, int *pfid, int *vfid) { @@ -373,6 +381,15 @@ u8 nbl_res_eth_id_to_pf_id(struct nbl_resource_mgt *res_mgt, u8 eth_id) return res_mgt->common_ops.eth_id_to_pf_id(res_mgt, eth_id); } +bool nbl_res_check_func_active_by_queue(struct nbl_resource_mgt *res_mgt, + u16 func_id) +{ + if (!res_mgt->common_ops.check_func_active_by_queue) + return check_func_active_by_queue(res_mgt, func_id); + + return res_mgt->common_ops.check_func_active_by_queue(res_mgt, func_id); +} + bool nbl_res_get_fix_capability(void *priv, enum nbl_fix_cap_type cap_type) { struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h index 5cbe0ebc4f89..de6307d13480 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h @@ -841,6 +841,8 @@ int nbl_res_func_id_to_bdf(struct nbl_resource_mgt *res_mgt, u16 func_id, u64 nbl_res_get_func_bar_base_addr(struct nbl_resource_mgt *res_mgt, u16 func_id); u8 
nbl_res_vsi_id_to_eth_id(struct nbl_resource_mgt *res_mgt, u16 vsi_id); +bool nbl_res_check_func_active_by_queue(struct nbl_resource_mgt *res_mgt, + u16 func_id); int nbl_adminq_mgt_start(struct nbl_resource_mgt *res_mgt); void nbl_adminq_mgt_stop(struct nbl_resource_mgt *res_mgt); @@ -849,6 +851,14 @@ int nbl_adminq_setup_ops(struct nbl_resource_ops *resource_ops); int nbl_intr_mgt_start(struct nbl_resource_mgt *res_mgt); void nbl_intr_mgt_stop(struct nbl_resource_mgt *res_mgt); int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops); + +int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt); +void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt); + +int nbl_vsi_mgt_start(struct nbl_resource_mgt *res_mgt); +void nbl_vsi_mgt_stop(struct nbl_resource_mgt *res_mgt); +int nbl_vsi_setup_ops(struct nbl_resource_ops *resource_ops); + bool nbl_res_get_fix_capability(void *priv, enum nbl_fix_cap_type cap_type); void nbl_res_set_fix_capability(struct nbl_resource_mgt *res_mgt, enum nbl_fix_cap_type cap_type); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c new file mode 100644 index 000000000000..84c6b481cfd0 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.c @@ -0,0 +1,168 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
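nbl_res_check_func_active_by_queue() above shows the wrapper pattern used throughout the resource layer: if a product-specific common_ops hook has not been installed, the wrapper falls back to the generic built-in helper. A small sketch of that default-op dispatch (the `mgr` struct and names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct mgr {
	/* Optional product-specific override; NULL means "use the default". */
	int (*is_active)(struct mgr *m, int func_id);
	unsigned int active_mask; /* state consulted by the default helper */
};

/* Generic fallback: a function is active if its bit is set. */
static int default_is_active(struct mgr *m, int func_id)
{
	return (m->active_mask >> func_id) & 1;
}

/* Wrapper: prefer the installed hook, else the built-in default. */
static int mgr_is_active(struct mgr *m, int func_id)
{
	if (!m->is_active)
		return default_is_active(m, func_id);

	return m->is_active(m, func_id);
}

static int always_active(struct mgr *m, int func_id)
{
	(void)m;
	(void)func_id;
	return 1;
}
```

Callers always go through the wrapper, so a product variant can override behavior without every call site checking for NULL.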
+ * Author: + */ +#include <linux/etherdevice.h> + +#include "nbl_vsi.h" + +static int nbl_res_set_promisc_mode(void *priv, u16 vsi_id, u16 mode) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u16 pf_id = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id); + u16 eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id); + + if (pf_id >= NBL_RES_MGT_TO_PF_NUM(res_mgt)) + return -EINVAL; + + hw_ops->set_promisc_mode(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id, + eth_id, mode); + + return 0; +} + +static u16 nbl_res_get_vf_function_id(void *priv, u16 vsi_id, int vfid) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 vf_vsi; + int pfid = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id); + + vf_vsi = vfid == -1 ? vsi_id : + nbl_res_pfvfid_to_vsi_id(res_mgt, pfid, vfid, + NBL_VSI_DATA); + + return nbl_res_vsi_id_to_func_id(res_mgt, vf_vsi); +} + +static u16 nbl_res_get_vf_vsi_id(void *priv, u16 vsi_id, int vfid) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 vf_vsi; + int pfid = nbl_res_vsi_id_to_pf_id(res_mgt, vsi_id); + + vf_vsi = vfid == -1 ? 
vsi_id :
+		nbl_res_pfvfid_to_vsi_id(res_mgt, pfid, vfid,
+					 NBL_VSI_DATA);
+	return vf_vsi;
+}
+
+static void nbl_res_vsi_deinit_chip_module(void *priv)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+
+	hw_ops->deinit_chip_module(NBL_RES_MGT_TO_HW_PRIV(res_mgt));
+}
+
+static int nbl_res_vsi_init_chip_module(void *priv)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_hw_ops *hw_ops;
+	u8 eth_speed, eth_num;
+
+	if (!res_mgt)
+		return -EINVAL;
+
+	hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt);
+	eth_speed = res_mgt->resource_info->board_info.eth_speed;
+	eth_num = res_mgt->resource_info->board_info.eth_num;
+
+	return hw_ops->init_chip_module(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+					eth_speed, eth_num);
+}
+
+static int nbl_res_vsi_init(void *priv)
+{
+	return 0;
+}
+
+static int nbl_res_get_link_forced(void *priv, u16 vsi_id)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_resource_info *resource_info =
+		NBL_RES_MGT_TO_RES_INFO(res_mgt);
+	u16 func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi_id);
+
+	if (func_id >= NBL_MAX_FUNC)
+		return -EINVAL;
+
+	return resource_info->link_forced_info[func_id];
+}
+
+/* NBL_VSI_SET_OPS(ops_name, func)
+ *
+ * Use X macros to reduce setup and remove code.
+ */
+#define NBL_VSI_OPS_TBL							\
+do {									\
+	NBL_VSI_SET_OPS(init_chip_module,				\
+			nbl_res_vsi_init_chip_module);			\
+	NBL_VSI_SET_OPS(deinit_chip_module,				\
+			nbl_res_vsi_deinit_chip_module);		\
+	NBL_VSI_SET_OPS(vsi_init, nbl_res_vsi_init);			\
+	NBL_VSI_SET_OPS(set_promisc_mode, nbl_res_set_promisc_mode);	\
+	NBL_VSI_SET_OPS(get_vf_function_id,				\
+			nbl_res_get_vf_function_id);			\
+	NBL_VSI_SET_OPS(get_vf_vsi_id, nbl_res_get_vf_vsi_id);		\
+	NBL_VSI_SET_OPS(get_link_forced, nbl_res_get_link_forced);	\
+} while (0)
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_vsi_setup_mgt(struct device *dev, struct nbl_vsi_mgt **vsi_mgt)
+{
+	*vsi_mgt = devm_kzalloc(dev, sizeof(struct nbl_vsi_mgt), GFP_KERNEL);
+	if (!*vsi_mgt)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void nbl_vsi_remove_mgt(struct device *dev, struct nbl_vsi_mgt **vsi_mgt)
+{
+	devm_kfree(dev, *vsi_mgt);
+	*vsi_mgt = NULL;
+}
+
+int nbl_vsi_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev;
+	struct nbl_vsi_mgt **vsi_mgt;
+
+	dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	vsi_mgt = &NBL_RES_MGT_TO_VSI_MGT(res_mgt);
+
+	return nbl_vsi_setup_mgt(dev, vsi_mgt);
+}
+
+void nbl_vsi_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev;
+	struct nbl_vsi_mgt **vsi_mgt;
+
+	dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	vsi_mgt = &NBL_RES_MGT_TO_VSI_MGT(res_mgt);
+
+	if (!(*vsi_mgt))
+		return;
+
+	nbl_vsi_remove_mgt(dev, vsi_mgt);
+}
+
+int nbl_vsi_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_VSI_SET_OPS(name, func) \
+	do { \
+		res_ops->NBL_NAME(name) = func; \
+	} while (0)
+	NBL_VSI_OPS_TBL;
+#undef NBL_VSI_SET_OPS
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
new file mode 100644
index 000000000000..94831e00b89a
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_vsi.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_VSI_H_ +#define _NBL_VSI_H_ + +#include "nbl_resource.h" + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h index ee4194ab7252..b8f49cc75bc8 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h @@ -10,6 +10,57 @@ #include "nbl_include.h" struct nbl_hw_ops { + int (*init_chip_module)(void *priv, u8 eth_speed, u8 eth_num); + void (*deinit_chip_module)(void *priv); + int (*init_qid_map_table)(void *priv); + int (*set_qid_map_table)(void *priv, void *data, int qid_map_select); + int (*set_qid_map_ready)(void *priv, bool ready); + int (*cfg_ipro_queue_tbl)(void *priv, u16 queue_id, u16 vsi_id, + u8 enable); + int (*cfg_ipro_dn_sport_tbl)(void *priv, u16 vsi_id, u16 dst_eth_id, + u16 bmode, bool binit); + int (*set_vnet_queue_info)(void *priv, + struct nbl_vnet_queue_info_param *param, + u16 queue_id); + int (*clear_vnet_queue_info)(void *priv, u16 queue_id); + int (*reset_dvn_cfg)(void *priv, u16 queue_id); + int (*reset_uvn_cfg)(void *priv, u16 queue_id); + int (*restore_dvn_context)(void *priv, u16 queue_id, u16 split, + u16 last_avail_index); + int (*restore_uvn_context)(void *priv, u16 queue_id, u16 split, + u16 last_avail_index); + int (*get_tx_queue_cfg)(void *priv, void *data, u16 queue_id); + int (*get_rx_queue_cfg)(void *priv, void *data, u16 queue_id); + int (*cfg_tx_queue)(void *priv, void *data, u16 queue_id); + int (*cfg_rx_queue)(void *priv, void *data, u16 queue_id); + bool (*check_q2tc)(void *priv, u16 queue_id); + int (*cfg_q2tc_netid)(void *priv, u16 queue_id, u16 netid, u16 vld); + int (*set_shaping)(void *priv, u16 func_id, u64 total_tx_rate, + u64 burst, u8 vld, bool active); + void (*active_shaping)(void *priv, u16 func_id); + void (*deactive_shaping)(void *priv, u16 func_id); + 
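The set_shaping()/set_ucar() hooks here are driven by nbl_res_queue_set_tx_rate()/set_rx_rate(), which first clamp the requested rate against the board's link speed before touching hardware. That validation step can be sketched as below (the speed enum values and Mbps ceilings are illustrative stand-ins for the driver's NBL_FW_PORT_SPEED_* / NBL_RATE_MBPS_* constants):

```c
#include <assert.h>
#include <errno.h>

/* Assumed stand-ins for the driver's speed and rate constants. */
enum { SPEED_10G, SPEED_25G, SPEED_100G };
#define RATE_MBPS_10G	10000
#define RATE_MBPS_25G	25000
#define RATE_MBPS_100G	100000

/* Returns 0 if tx_rate fits the port's ceiling, a negative errno otherwise,
 * mirroring the validation order in nbl_res_queue_set_tx_rate(). */
static int check_tx_rate(int eth_speed, int tx_rate)
{
	int max_rate;

	switch (eth_speed) {
	case SPEED_100G:
		max_rate = RATE_MBPS_100G;
		break;
	case SPEED_25G:
		max_rate = RATE_MBPS_25G;
		break;
	case SPEED_10G:
		max_rate = RATE_MBPS_10G;
		break;
	default:
		return -EOPNOTSUPP; /* unknown board speed: refuse, don't guess */
	}

	return tx_rate > max_rate ? -EINVAL : 0;
}
```

Rejecting unknown speeds with -EOPNOTSUPP rather than assuming a ceiling keeps a new board revision from silently accepting an unenforceable rate.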
int (*set_ucar)(void *priv, u16 func_id, u64 total_tx_rate, u64 burst, + u8 vld); + int (*cfg_dsch_net_to_group)(void *priv, u16 func_id, u16 group_id, + u16 vld); + int (*init_epro_rss_key)(void *priv); + + int (*init_epro_vpt_tbl)(void *priv, u16 vsi_id); + int (*cfg_epro_rss_ret)(void *priv, u32 index, u8 size_type, u32 q_num, + u16 *queue_list, const u32 *indir); + int (*set_epro_rss_pt)(void *priv, u16 vsi_id, u16 rss_ret_base, + u16 rss_entry_size); + int (*clear_epro_rss_pt)(void *priv, u16 vsi_id); + int (*disable_dvn)(void *priv, u16 queue_id); + int (*disable_uvn)(void *priv, u16 queue_id); + int (*lso_dsch_drain)(void *priv, u16 queue_id); + int (*rsc_cache_drain)(void *priv, u16 queue_id); + u16 (*save_dvn_ctx)(void *priv, u16 queue_id, u16 split); + u16 (*save_uvn_ctx)(void *priv, u16 queue_id, u16 split, + u16 queue_size); + void (*setup_queue_switch)(void *priv, u16 eth_id); + void (*init_pfc)(void *priv, u8 ether_ports); + void (*set_promisc_mode)(void *priv, u16 vsi_id, u16 eth_id, u16 mode); void (*configure_msix_map)(void *priv, u16 func_id, bool valid, dma_addr_t dma_addr, u8 bus, u8 devid, u8 function); @@ -55,6 +106,7 @@ struct nbl_hw_ops { bool (*check_adminq_dma_err)(void *priv, bool tx); u8 __iomem *(*get_hw_addr)(void *priv, size_t *size); + int (*set_sfp_state)(void *priv, u8 eth_id, u8 state); void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); enum nbl_hw_status (*get_hw_status)(void *priv); void (*set_fw_ping)(void *priv, u32 ping); @@ -62,6 +114,9 @@ struct nbl_hw_ops { void (*set_fw_pong)(void *priv, u32 pong); int (*process_abnormal_event)(void *priv, struct nbl_abnormal_event_info *info); + u32 (*get_uvn_desc_entry_stats)(void *priv); + void (*set_uvn_desc_wr_timeout)(void *priv, u16 timeout); + /* for board cfg */ u32 (*get_fw_eth_num)(void *priv); u32 (*get_fw_eth_map)(void *priv); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h 
b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index 134704229116..934612c12fc1 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -64,6 +64,11 @@ enum { NBL_VSI_MAX, }; +enum { + NBL_TX = 0, + NBL_RX, +}; + enum nbl_hw_status { NBL_HW_NOMAL, /* Most hw module is not work nomal exclude pcie/emp */ @@ -117,6 +122,15 @@ struct nbl_qid_map_param { u16 len; }; +struct nbl_vnet_queue_info_param { + u32 function_id; + u32 device_id; + u32 bus_id; + u32 msix_idx; + u32 msix_idx_valid; + u32 valid; +}; + struct nbl_queue_cfg_param { /* queue args*/ u64 desc; @@ -213,6 +227,99 @@ struct nbl_hw_stats { struct nbl_ustore_stats start_ustore_stats; }; +enum nbl_port_type { + NBL_PORT_TYPE_UNKNOWN = 0, + NBL_PORT_TYPE_FIBRE, + NBL_PORT_TYPE_COPPER, +}; + +enum nbl_port_max_rate { + NBL_PORT_MAX_RATE_UNKNOWN = 0, + NBL_PORT_MAX_RATE_1G, + NBL_PORT_MAX_RATE_10G, + NBL_PORT_MAX_RATE_25G, + NBL_PORT_MAX_RATE_100G, + NBL_PORT_MAX_RATE_100G_PAM4, +}; + +#define NBL_PORT_CAP_AUTONEG_MASK (BIT(NBL_PORT_CAP_AUTONEG)) +#define NBL_PORT_CAP_FEC_MASK \ + (BIT(NBL_PORT_CAP_FEC_OFF) | BIT(NBL_PORT_CAP_FEC_RS) | \ + BIT(NBL_PORT_CAP_FEC_BASER)) +#define NBL_PORT_CAP_PAUSE_MASK \ + (BIT(NBL_PORT_CAP_TX_PAUSE) | BIT(NBL_PORT_CAP_RX_PAUSE)) +#define NBL_PORT_CAP_SPEED_1G_MASK \ + (BIT(NBL_PORT_CAP_1000BASE_T) | BIT(NBL_PORT_CAP_1000BASE_X)) +#define NBL_PORT_CAP_SPEED_10G_MASK \ + (BIT(NBL_PORT_CAP_10GBASE_T) | BIT(NBL_PORT_CAP_10GBASE_KR) | \ + BIT(NBL_PORT_CAP_10GBASE_SR)) +#define NBL_PORT_CAP_SPEED_25G_MASK \ + (BIT(NBL_PORT_CAP_25GBASE_KR) | BIT(NBL_PORT_CAP_25GBASE_SR) | \ + BIT(NBL_PORT_CAP_25GBASE_CR) | BIT(NBL_PORT_CAP_25G_AUI)) +#define NBL_PORT_CAP_SPEED_50G_MASK \ + (BIT(NBL_PORT_CAP_50GBASE_KR2) | BIT(NBL_PORT_CAP_50GBASE_SR2) |\ + BIT(NBL_PORT_CAP_50GBASE_CR2) | BIT(NBL_PORT_CAP_50G_AUI2) | \ + BIT(NBL_PORT_CAP_50GBASE_KR_PAM4) | \ + 
BIT(NBL_PORT_CAP_50GBASE_SR_PAM4) | \ + BIT(NBL_PORT_CAP_50GBASE_CR_PAM4) | BIT(NBL_PORT_CAP_50G_AUI_PAM4)) +#define NBL_PORT_CAP_SPEED_100G_MASK \ + (BIT(NBL_PORT_CAP_100GBASE_KR4) | BIT(NBL_PORT_CAP_100GBASE_SR4) |\ + BIT(NBL_PORT_CAP_100GBASE_CR4) | BIT(NBL_PORT_CAP_100G_AUI4) |\ + BIT(NBL_PORT_CAP_100G_CAUI4) | BIT(NBL_PORT_CAP_100GBASE_KR2_PAM4) |\ + BIT(NBL_PORT_CAP_100GBASE_SR2_PAM4) | \ + BIT(NBL_PORT_CAP_100GBASE_CR2_PAM4) | \ + BIT(NBL_PORT_CAP_100G_AUI2_PAM4)) +#define NBL_PORT_CAP_SPEED_MASK \ + (NBL_PORT_CAP_SPEED_1G_MASK | NBL_PORT_CAP_SPEED_10G_MASK | \ + NBL_PORT_CAP_SPEED_25G_MASK | NBL_PORT_CAP_SPEED_50G_MASK | \ + NBL_PORT_CAP_SPEED_100G_MASK) +#define NBL_PORT_CAP_PAM4_MASK \ + (BIT(NBL_PORT_CAP_50GBASE_KR_PAM4) | \ + BIT(NBL_PORT_CAP_50GBASE_SR_PAM4) | \ + BIT(NBL_PORT_CAP_50GBASE_CR_PAM4) | BIT(NBL_PORT_CAP_50G_AUI_PAM4) |\ + BIT(NBL_PORT_CAP_100GBASE_KR2_PAM4) | \ + BIT(NBL_PORT_CAP_100GBASE_SR2_PAM4) | \ + BIT(NBL_PORT_CAP_100GBASE_CR2_PAM4) | \ + BIT(NBL_PORT_CAP_100G_AUI2_PAM4)) + +enum nbl_port_cap { + NBL_PORT_CAP_TX_PAUSE, + NBL_PORT_CAP_RX_PAUSE, + NBL_PORT_CAP_AUTONEG, + NBL_PORT_CAP_FEC_NONE, + NBL_PORT_CAP_FEC_OFF = NBL_PORT_CAP_FEC_NONE, + NBL_PORT_CAP_FEC_RS, + NBL_PORT_CAP_FEC_BASER, + NBL_PORT_CAP_1000BASE_T, + NBL_PORT_CAP_1000BASE_X, + NBL_PORT_CAP_10GBASE_T, + NBL_PORT_CAP_10GBASE_KR, + NBL_PORT_CAP_10GBASE_SR, + NBL_PORT_CAP_25GBASE_KR, + NBL_PORT_CAP_25GBASE_SR, + NBL_PORT_CAP_25GBASE_CR, + NBL_PORT_CAP_25G_AUI, + NBL_PORT_CAP_50GBASE_KR2, + NBL_PORT_CAP_50GBASE_SR2, + NBL_PORT_CAP_50GBASE_CR2, + NBL_PORT_CAP_50G_AUI2, + NBL_PORT_CAP_50GBASE_KR_PAM4, + NBL_PORT_CAP_50GBASE_SR_PAM4, + NBL_PORT_CAP_50GBASE_CR_PAM4, + NBL_PORT_CAP_50G_AUI_PAM4, + NBL_PORT_CAP_100GBASE_KR4, + NBL_PORT_CAP_100GBASE_SR4, + NBL_PORT_CAP_100GBASE_CR4, + NBL_PORT_CAP_100G_AUI4, + NBL_PORT_CAP_100G_CAUI4, + NBL_PORT_CAP_100GBASE_KR2_PAM4, + NBL_PORT_CAP_100GBASE_SR2_PAM4, + NBL_PORT_CAP_100GBASE_CR2_PAM4, + NBL_PORT_CAP_100G_AUI2_PAM4, + 
NBL_PORT_CAP_FEC_AUTONEG, + NBL_PORT_CAP_MAX +}; + enum nbl_fw_port_speed { NBL_FW_PORT_SPEED_10G, NBL_FW_PORT_SPEED_25G, @@ -236,6 +343,31 @@ struct nbl_cmd_net_ring_num { u16 net_max_qp_num[NBL_NET_RING_NUM_CMD_LEN]; }; +#define NBL_VF_NUM_CMD_LEN (8) +struct nbl_cmd_vf_num { + u32 valid; + u16 vf_max_num[NBL_VF_NUM_CMD_LEN]; +}; + +#define NBL_OPS_CALL(func, para) \ +do { \ + typeof(func) _func = (func); \ + if (_func) \ + _func para; \ +} while (0) + +#define NBL_OPS_CALL_RET(func, para) \ +({ \ + typeof(func) _func = (func); \ + _func ? _func para : 0; \ +}) + +#define NBL_OPS_CALL_RET_PTR(func, para) \ +({ \ + typeof(func) _func = (func); \ + _func ? _func para : NULL; \ +}) + enum { NBL_NETIF_F_SG_BIT, /* Scatter/gather IO. */ NBL_NETIF_F_IP_CSUM_BIT, /* csum TCP/UDP over IPv4 */ @@ -298,6 +430,8 @@ struct nbl_ring_param { u16 queue_size; }; +#define NBL_VSI_MAX_ID 1024 + struct nbl_mtu_entry { u32 ref_count; u16 mtu_value; -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v2 net-next 08/15] net/nebula-matrix: add vsi, queue, adminq resource definitions and implementation
2026-01-09 10:01 ` [PATCH v2 net-next 08/15] net/nebula-matrix: add vsi, queue, adminq " illusion.wang
@ 2026-01-09 18:38 ` Andrew Lunn
0 siblings, 0 replies; 19+ messages in thread
From: Andrew Lunn @ 2026-01-09 18:38 UTC (permalink / raw)
To: illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko,
lukas.bulwahn, edumazet, open list

> +static s32 nbl_res_aq_get_module_bitrate(struct nbl_resource_mgt *res_mgt,
> +					 u8 eth_id)
> +{
> +	struct device *dev = NBL_COMMON_TO_DEV(res_mgt->common);
> +	struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt);
> +	u8 data[SFF_8472_SIGNALING_RATE_MAX + 1];
> +	u32 result;
> +	u8 br_nom;
> +	u8 br_max;
> +	u8 identifier;
> +	u8 encoding = 0;
> +	int port_max_rate;
> +	int ret;
> +
> +	if (res_mgt->resource_info->board_info.eth_speed ==
> +	    NBL_FW_PORT_SPEED_100G) {
> +		ret = nbl_res_aq_turn_module_eeprom_page(res_mgt, eth_id, 0);
> +		if (ret) {
> +			dev_err(dev,
> +				"eth %d get_module_eeprom_info failed %d\n",
> +				eth_info->logic_eth_id[eth_id], ret);
> +			return NBL_PORT_MAX_RATE_UNKNOWN;
> +		}
> +	}
> +
> +	ret = nbl_res_aq_get_module_eeprom(res_mgt, eth_id, I2C_DEV_ADDR_A0, 0,
> +					   0, 0,
> +					   SFF_8472_SIGNALING_RATE_MAX + 1,
> +					   data);
> +	if (ret) {
> +		dev_err(dev, "eth %d get_module_eeprom_info failed %d\n",
> +			eth_info->logic_eth_id[eth_id], ret);
> +		return NBL_PORT_MAX_RATE_UNKNOWN;
> +	}
> +
> +	if (res_mgt->resource_info->board_info.eth_speed ==
> +	    NBL_FW_PORT_SPEED_100G) {
> +		ret = nbl_res_aq_get_module_eeprom(res_mgt, eth_id,
> +						   I2C_DEV_ADDR_A0, 0, 0,
> +						   SFF_8636_VENDOR_ENCODING, 1,
> +						   &encoding);
> +		if (ret) {
> +			dev_err(dev,
> +				"eth %d get_module_eeprom_info failed %d\n",
> +				eth_info->logic_eth_id[eth_id], ret);
> +			return NBL_PORT_MAX_RATE_UNKNOWN;
> +		}
> +	}
> +
> +	br_nom = data[SFF_8472_SIGNALING_RATE];
> +	br_max = data[SFF_8472_SIGNALING_RATE_MAX];
> +	identifier = data[SFF_8472_IDENTIFIER];
> +
> +	/* sff-8472 section 5.6 */
> +	if (br_nom == 255)
> +		result = (u32)br_max * 250;
> +	else if (br_nom == 0)
> +		result = 0;
> +	else
> +		result = (u32)br_nom * 100;
> +
> +	switch (result / 1000) {
> +	case 25:
> +		port_max_rate = NBL_PORT_MAX_RATE_25G;
> +		break;
> +	case 10:
> +		port_max_rate = NBL_PORT_MAX_RATE_10G;
> +		break;
> +	case 1:
> +		port_max_rate = NBL_PORT_MAX_RATE_1G;
> +		break;
> +	default:
> +		port_max_rate = NBL_PORT_MAX_RATE_UNKNOWN;
> +		break;
> +	}
> +
> +	if (identifier == SFF_IDENTIFIER_QSFP28)
> +		port_max_rate = NBL_PORT_MAX_RATE_100G;
> +
> +	if (identifier == SFF_IDENTIFIER_PAM4 ||
> +	    encoding == SFF_8636_ENCODING_PAM4)
> +		port_max_rate = NBL_PORT_MAX_RATE_100G_PAM4;
> +
> +	return port_max_rate;
> +}

Please could you pull everything dealing with the SFP into a patch of
its own. We will want to review this code and think about if you
should be using phylink.

Do you also have a PCS which the driver is configuring? If so, please
make that a separate patch as well.

	Andrew

^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH v2 net-next 09/15] net/nebula-matrix: add flow resource definitions and implementation
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (7 preceding siblings ...)
2026-01-09 10:01 ` [PATCH v2 net-next 08/15] net/nebula-matrix: add vsi, queue, adminq " illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 10/15] net/nebula-matrix: add txrx " illusion.wang
` (6 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list

The flow resource management functions include:

Flow Configuration: Setting the actions and key-value pairs for flows.
Flow Management: Allocating/releasing flow IDs, TCAM IDs, MCC IDs, etc.
Multicast Control: Managing multicast control groups.
Hash Table Management: Enabling rapid lookup of flow entries.
LLDP/LACP Flow Management: Managing flows related to link-layer protocols.
Multicast Flow Management: Managing multicast flows.
MTU Management: Managing the MTU of Virtual Switching Instances (VSIs).
Initialization and Cleanup: Initializing/cleaning up the flow management module.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 1 + .../nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c | 2268 +++++++++++++++++ .../nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h | 204 ++ .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 519 ++++ .../nbl_hw_leonis/nbl_resource_leonis.c | 10 + .../nbl_hw_leonis/nbl_resource_leonis.h | 3 + .../nbl/nbl_include/nbl_def_common.h | 87 + .../nbl/nbl_include/nbl_def_hw.h | 18 + 8 files changed, 3110 insertions(+) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index e611110ac369..16d751e01b8e 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -7,6 +7,7 @@ obj-$(CONFIG_NBL_CORE) := nbl_core.o nbl_core-objs += nbl_common/nbl_common.o \ nbl_channel/nbl_channel.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis.o \ + nbl_hw/nbl_hw_leonis/nbl_flow_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_queue_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_resource_leonis.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \ diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c new file mode 100644 index 000000000000..62681d64c3e0 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.c @@ -0,0 +1,2268 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ +#include <linux/etherdevice.h> +#include <linux/if_vlan.h> + +#include "nbl_flow_leonis.h" +#include "nbl_resource_leonis.h" + +#define NBL_ACT_SET_AUX_FIELD 1 +#define NBL_ACT_SET_DPORT 9 +#define NBL_ACT_SET_MCC 13 +#define NBL_FLOW_LEONIS_VSI_NUM_PER_ETH 256 + +static u32 nbl_flow_cfg_action_set_dport(u16 upcall_flag, u16 port_type, + u16 vsi, u16 next_stg_sel) +{ + union nbl_action_data set_dport = { .data = 0 }; + + set_dport.dport.up.upcall_flag = upcall_flag; + set_dport.dport.up.port_type = port_type; + set_dport.dport.up.port_id = vsi; + set_dport.dport.up.next_stg_sel = next_stg_sel; + + return set_dport.data + (NBL_ACT_SET_DPORT << 16); +} + +static u16 nbl_flow_cfg_action_set_dport_mcc_eth(u8 eth) +{ + union nbl_action_data set_dport = { .data = 0 }; + + set_dport.dport.down.upcall_flag = AUX_FWD_TYPE_NML_FWD; + set_dport.dport.down.port_type = SET_DPORT_TYPE_ETH_LAG; + set_dport.dport.down.next_stg_sel = NEXT_STG_SEL_EPRO; + set_dport.dport.down.lag_vld = 0; + set_dport.dport.down.eth_vld = 1; + set_dport.dport.down.eth_id = eth; + + return set_dport.data; +} + +static u16 nbl_flow_cfg_action_set_dport_mcc_vsi(u16 vsi) +{ + union nbl_action_data set_dport = { .data = 0 }; + + set_dport.dport.up.upcall_flag = AUX_FWD_TYPE_NML_FWD; + set_dport.dport.up.port_type = SET_DPORT_TYPE_VSI_HOST; + set_dport.dport.up.port_id = vsi; + set_dport.dport.up.next_stg_sel = NEXT_STG_SEL_ACL_S0; + + return set_dport.data; +} + +static u32 nbl_flow_cfg_action_set_dport_mcc_bmc(void) +{ + union nbl_action_data set_dport = { .data = 0 }; + + set_dport.dport.up.upcall_flag = AUX_FWD_TYPE_NML_FWD; + set_dport.dport.up.port_type = SET_DPORT_TYPE_SP_PORT; + set_dport.dport.up.port_id = NBL_FLOW_MCC_BMC_DPORT; + set_dport.dport.up.next_stg_sel = NEXT_STG_SEL_EPRO; + + return set_dport.data + (NBL_ACT_SET_DPORT << 16); +} + +static int nbl_flow_cfg_action_mcc(u16 mcc_id, u32 *action0, u32 *action1) +{ + union nbl_action_data mcc_idx_act = { .data = 0 }, + 
set_aux_act = { .data = 0 }; + + mcc_idx_act.mcc_idx.mcc_id = mcc_id; + *action0 = (u32)mcc_idx_act.data + (NBL_ACT_SET_MCC << 16); + + set_aux_act.set_aux.sub_id = NBL_SET_AUX_SET_AUX; + set_aux_act.set_aux.nstg_vld = 1; + set_aux_act.set_aux.nstg_val = NBL_NEXT_STG_MCC; + *action1 = (u32)set_aux_act.data + (NBL_ACT_SET_AUX_FIELD << 16); + + return 0; +} + +static int nbl_flow_cfg_action_up_tnl(struct nbl_flow_param param, u32 *action0, + u32 *action1) +{ + *action1 = 0; + if (param.mcc_id == NBL_MCC_ID_INVALID) + *action0 = + nbl_flow_cfg_action_set_dport(AUX_FWD_TYPE_NML_FWD, + SET_DPORT_TYPE_VSI_HOST, + param.vsi, + NEXT_STG_SEL_ACL_S0); + else + nbl_flow_cfg_action_mcc(param.mcc_id, action0, action1); + + return 0; +} + +static int nbl_flow_cfg_action_lldp_lacp_up(struct nbl_flow_param param, + u32 *action0, u32 *action1) +{ + *action1 = 0; + *action0 = nbl_flow_cfg_action_set_dport(AUX_FWD_TYPE_NML_FWD, + SET_DPORT_TYPE_VSI_HOST, + param.vsi, + NEXT_STG_SEL_ACL_S0); + + return 0; +} + +static int nbl_flow_cfg_action_up(struct nbl_flow_param param, u32 *action0, + u32 *action1) +{ + *action1 = 0; + if (param.mcc_id == NBL_MCC_ID_INVALID) + *action0 = + nbl_flow_cfg_action_set_dport(AUX_FWD_TYPE_NML_FWD, + SET_DPORT_TYPE_VSI_HOST, + param.vsi, + NEXT_STG_SEL_NONE); + else + nbl_flow_cfg_action_mcc(param.mcc_id, action0, action1); + + return 0; +} + +static int nbl_flow_cfg_action_down(struct nbl_flow_param param, u32 *action0, + u32 *action1) +{ + *action1 = 0; + if (param.mcc_id == NBL_MCC_ID_INVALID) + *action0 = + nbl_flow_cfg_action_set_dport(AUX_FWD_TYPE_NML_FWD, + SET_DPORT_TYPE_VSI_HOST, + param.vsi, + NEXT_STG_SEL_ACL_S0); + else + nbl_flow_cfg_action_mcc(param.mcc_id, action0, action1); + + return 0; +} + +static int nbl_flow_cfg_up_tnl_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, + u8 eth_mode) +{ + union nbl_l2_hw_up_data_u *kt_data = (union nbl_l2_hw_up_data_u *)data; + u64 dst_mac = 0; + u8 sport; + u8 
reverse_mac[ETH_ALEN]; + + nbl_convert_mac(param.mac, reverse_mac); + + memset(kt_data->hash_key, 0x0, sizeof(kt_data->hash_key)); + ether_addr_copy((u8 *)&dst_mac, reverse_mac); + + kt_data->info.dst_mac = dst_mac; + kt_data->info.svlan_id = param.vid; + kt_data->info.template = NBL_EM0_PT_HW_UP_TUNNEL_L2; + kt_data->info.padding = 0; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +static int nbl_flow_cfg_lldp_lacp_up_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, + u8 eth_mode) +{ + union nbl_l2_hw_lldp_lacp_data_u *kt_data = + (union nbl_l2_hw_lldp_lacp_data_u *)data; + u8 sport; + + kt_data->info.template = NBL_EM0_PT_HW_UP_LLDP_LACP; + + kt_data->info.ether_type = param.ether_type; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +static int nbl_flow_cfg_up_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, u8 eth_mode) +{ + union nbl_l2_hw_up_data_u *kt_data = (union nbl_l2_hw_up_data_u *)data; + u64 dst_mac = 0; + u8 sport; + u8 reverse_mac[ETH_ALEN]; + + nbl_convert_mac(param.mac, reverse_mac); + + memset(kt_data->hash_key, 0x0, sizeof(kt_data->hash_key)); + ether_addr_copy((u8 *)&dst_mac, reverse_mac); + + kt_data->info.dst_mac = dst_mac; + kt_data->info.svlan_id = param.vid; + kt_data->info.template = NBL_EM0_PT_HW_UP_L2; + kt_data->info.padding = 0; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +static int nbl_flow_cfg_down_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, u8 eth_mode) +{ + union nbl_l2_hw_down_data_u *kt_data = + (union nbl_l2_hw_down_data_u *)data; + u64 dst_mac = 0; + u8 sport; + u8 reverse_mac[ETH_ALEN]; + + nbl_convert_mac(param.mac, reverse_mac); + + memset(kt_data->hash_key, 0x0, sizeof(kt_data->hash_key)); + ether_addr_copy((u8 *)&dst_mac, reverse_mac); + + kt_data->info.dst_mac = dst_mac; + kt_data->info.svlan_id = 
param.vid; + kt_data->info.template = NBL_EM0_PT_HW_DOWN_L2; + kt_data->info.padding = 0; + + sport = param.vsi >> 8; + if (eth_mode == NBL_TWO_ETHERNET_PORT) + sport &= 0xFE; + if (eth_mode == NBL_ONE_ETHERNET_PORT) + sport = 0; + kt_data->info.sport = sport; + + return 0; +} + +static void nbl_flow_cfg_kt_action_up_tnl(union nbl_common_data_u *data, + u32 action0, u32 action1) +{ + union nbl_l2_hw_up_data_u *kt_data = (union nbl_l2_hw_up_data_u *)data; + + kt_data->info.act0 = action0; + kt_data->info.act1 = action1; +} + +static void nbl_flow_cfg_kt_action_lldp_lacp_up(union nbl_common_data_u *data, + u32 action0, u32 action1) +{ + union nbl_l2_hw_lldp_lacp_data_u *kt_data = + (union nbl_l2_hw_lldp_lacp_data_u *)data; + + kt_data->info.act0 = action0; +} + +static void nbl_flow_cfg_kt_action_up(union nbl_common_data_u *data, + u32 action0, u32 action1) +{ + union nbl_l2_hw_up_data_u *kt_data = (union nbl_l2_hw_up_data_u *)data; + + kt_data->info.act0 = action0; + kt_data->info.act1 = action1; +} + +static void nbl_flow_cfg_kt_action_down(union nbl_common_data_u *data, + u32 action0, u32 action1) +{ + union nbl_l2_hw_down_data_u *kt_data = + (union nbl_l2_hw_down_data_u *)data; + + kt_data->info.act0 = action0; + kt_data->info.act1 = action1; +} + +static int nbl_flow_cfg_action_multi_mcast(struct nbl_flow_param param, + u32 *action0, u32 *action1) +{ + return nbl_flow_cfg_action_mcc(param.mcc_id, action0, action1); +} + +static int +nbl_flow_cfg_l2up_multi_mcast_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, + u8 eth_mode) +{ + union nbl_l2_hw_up_multi_mcast_data_u *kt_data = + (union nbl_l2_hw_up_multi_mcast_data_u *)data; + u8 sport; + + kt_data->info.template = NBL_EM0_PT_HW_L2_UP_MULTI_MCAST; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +static void +nbl_flow_cfg_kt_action_l2up_multi_mcast(union nbl_common_data_u *data, + u32 action0, u32 action1) +{ + union 
nbl_l2_hw_up_multi_mcast_data_u *kt_data = + (union nbl_l2_hw_up_multi_mcast_data_u *)data; + + kt_data->info.act0 = action0; +} + +static int +nbl_flow_cfg_l3up_multi_mcast_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, + u8 eth_mode) +{ + union nbl_l2_hw_up_multi_mcast_data_u *kt_data = + (union nbl_l2_hw_up_multi_mcast_data_u *)data; + u8 sport; + + kt_data->info.template = NBL_EM0_PT_HW_L3_UP_MULTI_MCAST; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +static int +nbl_flow_cfg_l2down_multi_mcast_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, + u8 eth_mode) +{ + union nbl_l2_hw_down_multi_mcast_data_u *kt_data = + (union nbl_l2_hw_down_multi_mcast_data_u *)data; + u8 sport; + + kt_data->info.template = NBL_EM0_PT_HW_L2_DOWN_MULTI_MCAST; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +static void +nbl_flow_cfg_kt_action_l2down_multi_mcast(union nbl_common_data_u *data, + u32 action0, u32 action1) +{ + union nbl_l2_hw_down_multi_mcast_data_u *kt_data = + (union nbl_l2_hw_down_multi_mcast_data_u *)data; + + kt_data->info.act0 = action0; +} + +static int +nbl_flow_cfg_l3down_multi_mcast_key_value(union nbl_common_data_u *data, + struct nbl_flow_param param, + u8 eth_mode) +{ + union nbl_l2_hw_down_multi_mcast_data_u *kt_data = + (union nbl_l2_hw_down_multi_mcast_data_u *)data; + u8 sport; + + kt_data->info.template = NBL_EM0_PT_HW_L3_DOWN_MULTI_MCAST; + + sport = param.eth; + kt_data->info.sport = sport + NBL_SPORT_ETH_OFFSET; + + return 0; +} + +#define NBL_FLOW_OPS_ARR_ENTRY(type, action_func, kt_func, kt_action_func) \ + [type] = {.cfg_action = action_func, .cfg_key = kt_func, \ + .cfg_kt_action = kt_action_func} +static const struct nbl_flow_rule_cfg_ops cfg_ops[] = { + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_UP_TNL, + nbl_flow_cfg_action_up_tnl, + nbl_flow_cfg_up_tnl_key_value, + nbl_flow_cfg_kt_action_up_tnl), + 
NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_UP, + nbl_flow_cfg_action_up, + nbl_flow_cfg_up_key_value, + nbl_flow_cfg_kt_action_up), + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_DOWN, + nbl_flow_cfg_action_down, + nbl_flow_cfg_down_key_value, + nbl_flow_cfg_kt_action_down), + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_LLDP_LACP_UP, + nbl_flow_cfg_action_lldp_lacp_up, + nbl_flow_cfg_lldp_lacp_up_key_value, + nbl_flow_cfg_kt_action_lldp_lacp_up), + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_L2_UP_MULTI_MCAST, + nbl_flow_cfg_action_multi_mcast, + nbl_flow_cfg_l2up_multi_mcast_key_value, + nbl_flow_cfg_kt_action_l2up_multi_mcast), + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_L3_UP_MULTI_MCAST, + nbl_flow_cfg_action_multi_mcast, + nbl_flow_cfg_l3up_multi_mcast_key_value, + nbl_flow_cfg_kt_action_l2up_multi_mcast), + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_L2_DOWN_MULTI_MCAST, + nbl_flow_cfg_action_multi_mcast, + nbl_flow_cfg_l2down_multi_mcast_key_value, + nbl_flow_cfg_kt_action_l2down_multi_mcast), + NBL_FLOW_OPS_ARR_ENTRY(NBL_FLOW_L3_DOWN_MULTI_MCAST, + nbl_flow_cfg_action_multi_mcast, + nbl_flow_cfg_l3down_multi_mcast_key_value, + nbl_flow_cfg_kt_action_l2down_multi_mcast), +}; + +static int nbl_flow_alloc_flow_id(struct nbl_flow_mgt *flow_mgt, + struct nbl_flow_fem_entry *flow) +{ + u32 flow_id; + + if (flow->flow_type == NBL_KT_HALF_MODE) { + flow_id = find_first_zero_bit(flow_mgt->flow_id_bitmap, + NBL_MACVLAN_TABLE_LEN); + if (flow_id == NBL_MACVLAN_TABLE_LEN) + return -ENOSPC; + set_bit(flow_id, flow_mgt->flow_id_bitmap); + flow_mgt->flow_id_cnt--; + } else { + flow_id = nbl_common_find_free_idx(flow_mgt->flow_id_bitmap, + NBL_MACVLAN_TABLE_LEN, + 2, 2); + if (flow_id == NBL_MACVLAN_TABLE_LEN) + return -ENOSPC; + set_bit(flow_id, flow_mgt->flow_id_bitmap); + set_bit(flow_id + 1, flow_mgt->flow_id_bitmap); + flow_mgt->flow_id_cnt -= 2; + } + + flow->flow_id = flow_id; + return 0; +} + +static void nbl_flow_free_flow_id(struct nbl_flow_mgt *flow_mgt, + struct nbl_flow_fem_entry *flow) +{ + if (flow->flow_id == U16_MAX) + 
return; + + if (flow->flow_type == NBL_KT_HALF_MODE) { + clear_bit(flow->flow_id, flow_mgt->flow_id_bitmap); + flow->flow_id = 0xFFFF; + flow_mgt->flow_id_cnt++; + } else { + clear_bit(flow->flow_id, flow_mgt->flow_id_bitmap); + clear_bit(flow->flow_id + 1, flow_mgt->flow_id_bitmap); + flow->flow_id = 0xFFFF; + flow_mgt->flow_id_cnt += 2; + } +} + +static int nbl_flow_alloc_tcam_id(struct nbl_flow_mgt *flow_mgt, + struct nbl_tcam_item *tcam_item) +{ + u32 tcam_id; + + tcam_id = find_first_zero_bit(flow_mgt->tcam_id, NBL_TCAM_TABLE_LEN); + if (tcam_id == NBL_TCAM_TABLE_LEN) + return -ENOSPC; + + set_bit(tcam_id, flow_mgt->tcam_id); + tcam_item->tcam_index = tcam_id; + + return 0; +} + +static void nbl_flow_free_tcam_id(struct nbl_flow_mgt *flow_mgt, + struct nbl_tcam_item *tcam_item) +{ + clear_bit(tcam_item->tcam_index, flow_mgt->tcam_id); + tcam_item->tcam_index = 0; +} + +static int nbl_flow_alloc_mcc_id(struct nbl_flow_mgt *flow_mgt) +{ + u32 mcc_id; + + mcc_id = find_first_zero_bit(flow_mgt->mcc_id_bitmap, + NBL_FLOW_MCC_INDEX_SIZE); + if (mcc_id == NBL_FLOW_MCC_INDEX_SIZE) + return -ENOSPC; + + set_bit(mcc_id, flow_mgt->mcc_id_bitmap); + + return mcc_id + NBL_FLOW_MCC_INDEX_START; +} + +static void nbl_flow_free_mcc_id(struct nbl_flow_mgt *flow_mgt, u32 mcc_id) +{ + if (mcc_id >= NBL_FLOW_MCC_INDEX_START) + clear_bit(mcc_id - NBL_FLOW_MCC_INDEX_START, + flow_mgt->mcc_id_bitmap); +} + +static void nbl_flow_set_mt_input(struct nbl_mt_input *mt_input, + union nbl_common_data_u *kt_data, u8 type, + u16 flow_id) +{ + int i; + u16 key_len; + + key_len = ((type) == NBL_KT_HALF_MODE ? 
NBL_KT_BYTE_HALF_LEN : + NBL_KT_BYTE_LEN); + for (i = 0; i < key_len; i++) + mt_input->key[i] = kt_data->hash_key[key_len - 1 - i]; + + mt_input->tbl_id = flow_id + NBL_EM_HW_KT_OFFSET; + mt_input->depth = 0; + mt_input->power = NBL_PP0_POWER; +} + +static void nbl_flow_key_hash(struct nbl_flow_fem_entry *flow, + struct nbl_mt_input *mt_input) +{ + u16 ht0_hash = 0; + u16 ht1_hash = 0; + + ht0_hash = NBL_CRC16_CCITT(mt_input->key, NBL_KT_BYTE_LEN); + ht1_hash = NBL_CRC16_IBM(mt_input->key, NBL_KT_BYTE_LEN); + flow->ht0_hash = + nbl_hash_transfer(ht0_hash, mt_input->power, mt_input->depth); + flow->ht1_hash = + nbl_hash_transfer(ht1_hash, mt_input->power, mt_input->depth); +} + +static bool nbl_pp_ht0_ht1_search(struct nbl_flow_ht_mng *pp_ht0_mng, + u16 ht0_hash, + struct nbl_flow_ht_mng *pp_ht1_mng, + u16 ht1_hash, struct nbl_common_info *common) +{ + struct nbl_flow_ht_tbl *node0 = NULL; + struct nbl_flow_ht_tbl *node1 = NULL; + u16 i = 0; + bool is_find = false; + + node0 = pp_ht0_mng->hash_map[ht0_hash]; + if (node0) + for (i = 0; i < NBL_HASH_CFT_MAX; i++) + if (node0->key[i].vid && + node0->key[i].ht_other_index == ht1_hash) { + is_find = true; + nbl_debug(common, + "Conflicted ht on vid %d and kt_index %u\n", + node0->key[i].vid, + node0->key[i].kt_index); + return is_find; + } + + node1 = pp_ht1_mng->hash_map[ht1_hash]; + if (node1) + for (i = 0; i < NBL_HASH_CFT_MAX; i++) + if (node1->key[i].vid && + node1->key[i].ht_other_index == ht0_hash) { + is_find = true; + nbl_debug(common, + "Conflicted ht on vid %d and kt_index %u\n", + node1->key[i].vid, + node1->key[i].kt_index); + return is_find; + } + + return is_find; +} + +static bool nbl_flow_check_ht_conflict(struct nbl_flow_ht_mng *pp_ht0_mng, + struct nbl_flow_ht_mng *pp_ht1_mng, + u16 ht0_hash, u16 ht1_hash, + struct nbl_common_info *common) +{ + return nbl_pp_ht0_ht1_search(pp_ht0_mng, ht0_hash, pp_ht1_mng, ht1_hash, + common); +} + +static int nbl_flow_find_ht_avail_table(struct nbl_flow_ht_mng 
*pp_ht0_mng, + struct nbl_flow_ht_mng *pp_ht1_mng, + u16 ht0_hash, u16 ht1_hash) +{ + struct nbl_flow_ht_tbl *pp_ht0_node = NULL; + struct nbl_flow_ht_tbl *pp_ht1_node = NULL; + + pp_ht0_node = pp_ht0_mng->hash_map[ht0_hash]; + pp_ht1_node = pp_ht1_mng->hash_map[ht1_hash]; + + if (!pp_ht0_node && !pp_ht1_node) { + return 0; + } else if (pp_ht0_node && !pp_ht1_node) { + if (pp_ht0_node->ref_cnt >= NBL_HASH_CFT_AVL) + return 1; + else + return 0; + } else if (!pp_ht0_node && pp_ht1_node) { + if (pp_ht1_node->ref_cnt >= NBL_HASH_CFT_AVL) + return 0; + else + return 1; + } else { + if ((pp_ht0_node->ref_cnt <= NBL_HASH_CFT_AVL || + (pp_ht0_node->ref_cnt > NBL_HASH_CFT_AVL && + pp_ht0_node->ref_cnt < NBL_HASH_CFT_MAX && + pp_ht1_node->ref_cnt > NBL_HASH_CFT_AVL))) + return 0; + else if (((pp_ht0_node->ref_cnt > NBL_HASH_CFT_AVL && + pp_ht1_node->ref_cnt <= NBL_HASH_CFT_AVL) || + (pp_ht0_node->ref_cnt == NBL_HASH_CFT_MAX && + pp_ht1_node->ref_cnt > NBL_HASH_CFT_AVL && + pp_ht1_node->ref_cnt < NBL_HASH_CFT_MAX))) + return 1; + else + return -1; + } +} + +static int nbl_flow_insert_pp_ht(struct nbl_flow_ht_mng *pp_ht_mng, u16 hash, + u16 hash_other, u32 key_index) +{ + struct nbl_flow_ht_tbl *node; + int i; + + node = pp_ht_mng->hash_map[hash]; + if (!node) { + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) + return -ENOSPC; + pp_ht_mng->hash_map[hash] = node; + } + + for (i = 0; i < NBL_HASH_CFT_MAX; i++) { + if (node->key[i].vid == 0) { + node->key[i].vid = 1; + node->key[i].ht_other_index = hash_other; + node->key[i].kt_index = key_index; + node->ref_cnt++; + break; + } + } + + return i; +} + +static void nbl_flow_add_ht(struct nbl_ht_item *ht_item, + struct nbl_flow_fem_entry *flow, u32 key_index, + struct nbl_flow_ht_mng *pp_ht_mng, u8 ht_table) +{ + u16 ht_hash; + u16 ht_other_hash; + + ht_hash = ht_table == NBL_HT0 ? flow->ht0_hash : flow->ht1_hash; + ht_other_hash = ht_table == NBL_HT0 ? 
flow->ht1_hash : flow->ht0_hash; + + ht_item->hash_bucket = nbl_flow_insert_pp_ht(pp_ht_mng, ht_hash, + ht_other_hash, key_index); + if (ht_item->hash_bucket < 0) + return; + + ht_item->ht_table = ht_table; + ht_item->key_index = key_index; + ht_item->ht0_hash = flow->ht0_hash; + ht_item->ht1_hash = flow->ht1_hash; + + flow->hash_bucket = ht_item->hash_bucket; + flow->hash_table = ht_item->ht_table; +} + +static void nbl_flow_del_ht(struct nbl_ht_item *ht_item, + struct nbl_flow_fem_entry *flow, + struct nbl_flow_ht_mng *pp_ht_mng) +{ + struct nbl_flow_ht_tbl *pp_ht_node = NULL; + u16 ht_hash; + u16 ht_other_hash; + int i; + + ht_hash = ht_item->ht_table == NBL_HT0 ? flow->ht0_hash : + flow->ht1_hash; + ht_other_hash = ht_item->ht_table == NBL_HT0 ? flow->ht1_hash : + flow->ht0_hash; + + pp_ht_node = pp_ht_mng->hash_map[ht_hash]; + if (!pp_ht_node) + return; + + for (i = 0; i < NBL_HASH_CFT_MAX; i++) { + if (pp_ht_node->key[i].vid == 1 && + pp_ht_node->key[i].ht_other_index == ht_other_hash) { + memset(&pp_ht_node->key[i], 0, + sizeof(pp_ht_node->key[i])); + pp_ht_node->ref_cnt--; + break; + } + } + + if (!pp_ht_node->ref_cnt) { + kfree(pp_ht_node); + pp_ht_mng->hash_map[ht_hash] = NULL; + } +} + +static int nbl_flow_send_2hw(struct nbl_resource_mgt *res_mgt, + struct nbl_ht_item ht_item, + struct nbl_kt_item kt_item, u8 key_type) +{ + struct nbl_hw_ops *hw_ops; + u16 hash, hash_other; + int ret = 0; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + ret = hw_ops->set_kt(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + kt_item.kt_data.hash_key, ht_item.key_index, + key_type); + if (ret) + goto set_kt_fail; + + hash = ht_item.ht_table == NBL_HT0 ? ht_item.ht0_hash : + ht_item.ht1_hash; + hash_other = ht_item.ht_table == NBL_HT0 ? 
ht_item.ht1_hash : + ht_item.ht0_hash; + ret = hw_ops->set_ht(NBL_RES_MGT_TO_HW_PRIV(res_mgt), hash, hash_other, + ht_item.ht_table, ht_item.hash_bucket, + ht_item.key_index, 1); + if (ret) + goto set_ht_fail; + + ret = hw_ops->search_key(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + kt_item.kt_data.hash_key, key_type); + if (ret) + goto search_fail; + + return 0; + +search_fail: + ret = hw_ops->set_ht(NBL_RES_MGT_TO_HW_PRIV(res_mgt), hash, 0, + ht_item.ht_table, ht_item.hash_bucket, 0, 0); +set_ht_fail: + memset(kt_item.kt_data.hash_key, 0, sizeof(kt_item.kt_data.hash_key)); + hw_ops->set_kt(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + kt_item.kt_data.hash_key, ht_item.key_index, key_type); +set_kt_fail: + return ret; +} + +static int nbl_flow_del_2hw(struct nbl_resource_mgt *res_mgt, + struct nbl_ht_item ht_item, + struct nbl_kt_item kt_item, u8 key_type) +{ + struct nbl_hw_ops *hw_ops; + u16 hash; + + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + hash = ht_item.ht_table == NBL_HT0 ? ht_item.ht0_hash : + ht_item.ht1_hash; + hw_ops->set_ht(NBL_RES_MGT_TO_HW_PRIV(res_mgt), hash, 0, + ht_item.ht_table, ht_item.hash_bucket, 0, 0); + + return 0; +} + +static void nbl_flow_cfg_tcam(struct nbl_tcam_item *tcam_item, + struct nbl_ht_item *ht_item, + struct nbl_kt_item *kt_item, u32 action0, + u32 action1) +{ + tcam_item->key_mode = NBL_KT_HALF_MODE; + tcam_item->pp_type = NBL_PT_PP0; + tcam_item->tcam_action[0] = action0; + tcam_item->tcam_action[1] = action1; + memcpy(&tcam_item->ht_item, ht_item, sizeof(struct nbl_ht_item)); + memcpy(&tcam_item->kt_item, kt_item, sizeof(struct nbl_kt_item)); +} + +static int nbl_flow_add_tcam(struct nbl_resource_mgt *res_mgt, + struct nbl_tcam_item tcam_item) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + return hw_ops->add_tcam(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + tcam_item.tcam_index, + tcam_item.kt_item.kt_data.hash_key, + tcam_item.tcam_action, tcam_item.key_mode, + NBL_PT_PP0); +} + +static void nbl_flow_del_tcam(struct 
nbl_resource_mgt *res_mgt, + struct nbl_tcam_item tcam_item) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + hw_ops->del_tcam(NBL_RES_MGT_TO_HW_PRIV(res_mgt), tcam_item.tcam_index, + tcam_item.key_mode, NBL_PT_PP0); +} + +static int nbl_flow_add_flow(struct nbl_resource_mgt *res_mgt, + struct nbl_flow_param param, s32 type, + struct nbl_flow_fem_entry *flow) +{ + struct nbl_flow_mgt *flow_mgt; + struct nbl_common_info *common; + struct nbl_mt_input mt_input; + struct nbl_ht_item ht_item; + struct nbl_kt_item kt_item; + struct nbl_tcam_item *tcam_item = NULL; + struct nbl_flow_ht_mng *pp_ht_mng = NULL; + u32 action0, action1; + int ht_table; + int ret = 0; + + memset(&mt_input, 0, sizeof(mt_input)); + memset(&ht_item, 0, sizeof(ht_item)); + memset(&kt_item, 0, sizeof(kt_item)); + + tcam_item = kzalloc(sizeof(*tcam_item), GFP_ATOMIC); + if (!tcam_item) + return -ENOMEM; + + flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + common = NBL_RES_MGT_TO_COMMON(res_mgt); + + flow->flow_type = param.type; + flow->type = type; + flow->flow_id = 0xFFFF; + + ret = nbl_flow_alloc_flow_id(flow_mgt, flow); + if (ret) + goto free_mem; + + ret = cfg_ops[type].cfg_action(param, &action0, &action1); + if (ret) + goto free_mem; + + ret = cfg_ops[type].cfg_key(&kt_item.kt_data, param, + NBL_COMMON_TO_ETH_MODE(common)); + if (ret) + goto free_mem; + + nbl_flow_set_mt_input(&mt_input, &kt_item.kt_data, param.type, + flow->flow_id); + nbl_flow_key_hash(flow, &mt_input); + + if (nbl_flow_check_ht_conflict(&flow_mgt->pp0_ht0_mng, + &flow_mgt->pp0_ht1_mng, flow->ht0_hash, + flow->ht1_hash, common)) + flow->tcam_flag = true; + + ht_table = nbl_flow_find_ht_avail_table(&flow_mgt->pp0_ht0_mng, + &flow_mgt->pp0_ht1_mng, + flow->ht0_hash, flow->ht1_hash); + if (ht_table < 0) + flow->tcam_flag = true; + + if (!flow->tcam_flag) { + pp_ht_mng = ht_table == NBL_HT0 ? 
&flow_mgt->pp0_ht0_mng : + &flow_mgt->pp0_ht1_mng; + nbl_flow_add_ht(&ht_item, flow, mt_input.tbl_id, pp_ht_mng, + ht_table); + + cfg_ops[type].cfg_kt_action(&kt_item.kt_data, action0, action1); + ret = nbl_flow_send_2hw(res_mgt, ht_item, kt_item, param.type); + } else { + ret = nbl_flow_alloc_tcam_id(flow_mgt, tcam_item); + if (ret) + goto out; + + nbl_flow_cfg_tcam(tcam_item, &ht_item, &kt_item, action0, + action1); + flow->tcam_index = tcam_item->tcam_index; + + ret = nbl_flow_add_tcam(res_mgt, *tcam_item); + } + +out: + if (ret) { + if (flow->tcam_flag) + nbl_flow_free_tcam_id(flow_mgt, tcam_item); + else + nbl_flow_del_ht(&ht_item, flow, pp_ht_mng); + + nbl_flow_free_flow_id(flow_mgt, flow); + } + +free_mem: + kfree(tcam_item); + + return ret; +} + +static void nbl_flow_del_flow(struct nbl_resource_mgt *res_mgt, + struct nbl_flow_fem_entry *flow) +{ + struct nbl_flow_mgt *flow_mgt; + struct nbl_ht_item ht_item; + struct nbl_kt_item kt_item; + struct nbl_tcam_item tcam_item; + struct nbl_flow_ht_mng *pp_ht_mng = NULL; + + if (flow->flow_id == 0xFFFF) + return; + + memset(&ht_item, 0, sizeof(ht_item)); + memset(&kt_item, 0, sizeof(kt_item)); + memset(&tcam_item, 0, sizeof(tcam_item)); + + flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + + if (!flow->tcam_flag) { + ht_item.ht_table = flow->hash_table; + ht_item.ht0_hash = flow->ht0_hash; + ht_item.ht1_hash = flow->ht1_hash; + ht_item.hash_bucket = flow->hash_bucket; + + pp_ht_mng = flow->hash_table == NBL_HT0 ? 
+ &flow_mgt->pp0_ht0_mng : + &flow_mgt->pp0_ht1_mng; + + nbl_flow_del_ht(&ht_item, flow, pp_ht_mng); + nbl_flow_del_2hw(res_mgt, ht_item, kt_item, flow->flow_type); + } else { + tcam_item.tcam_index = flow->tcam_index; + nbl_flow_del_tcam(res_mgt, tcam_item); + nbl_flow_free_tcam_id(flow_mgt, &tcam_item); + } + + nbl_flow_free_flow_id(flow_mgt, flow); +} + +static struct nbl_flow_mcc_node * +nbl_flow_alloc_mcc_node(struct nbl_flow_mgt *flow_mgt, u8 type, u16 data, + u16 head) +{ + struct nbl_flow_mcc_node *node; + int mcc_id; + u16 mcc_action; + + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) + return NULL; + + mcc_id = nbl_flow_alloc_mcc_id(flow_mgt); + if (mcc_id < 0) { + kfree(node); + return NULL; + } + + switch (type) { + case NBL_MCC_INDEX_ETH: + mcc_action = nbl_flow_cfg_action_set_dport_mcc_eth((u8)data); + break; + case NBL_MCC_INDEX_VSI: + mcc_action = nbl_flow_cfg_action_set_dport_mcc_vsi(data); + break; + case NBL_MCC_INDEX_BMC: + mcc_action = nbl_flow_cfg_action_set_dport_mcc_bmc(); + break; + default: + nbl_flow_free_mcc_id(flow_mgt, mcc_id); + kfree(node); + return NULL; + } + + INIT_LIST_HEAD(&node->node); + node->mcc_id = mcc_id; + node->mcc_head = head; + node->type = type; + node->data = data; + node->mcc_action = mcc_action; + + return node; +} + +static void nbl_flow_free_mcc_node(struct nbl_flow_mgt *flow_mgt, + struct nbl_flow_mcc_node *node) +{ + nbl_flow_free_mcc_id(flow_mgt, node->mcc_id); + kfree(node); +} + +/* not consider multicast node first change, need modify all macvlan mcc */ +static int nbl_flow_add_mcc_node(struct nbl_resource_mgt *res_mgt, + struct nbl_flow_mcc_node *mcc_node, + struct list_head *head, struct list_head *list, + struct list_head *suffix) +{ + struct nbl_flow_mcc_node *mcc_head = NULL; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u16 prev_mcc_id, next_mcc_id = NBL_MCC_ID_INVALID; + int ret = 0; + + /* mcc_head must init before mcc_list */ + if (mcc_node->mcc_head) { + 
list_add_tail(&mcc_node->node, head); + prev_mcc_id = NBL_MCC_ID_INVALID; + + WARN_ON(!list_empty(list)); + ret = hw_ops->add_mcc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + mcc_node->mcc_id, prev_mcc_id, + NBL_MCC_ID_INVALID, mcc_node->mcc_action); + goto check_ret; + } + + list_add_tail(&mcc_node->node, list); + + if (list_is_first(&mcc_node->node, list)) + prev_mcc_id = NBL_MCC_ID_INVALID; + else + prev_mcc_id = list_prev_entry(mcc_node, node)->mcc_id; + + /* not head, next mcc may point suffix */ + if (suffix && !list_empty(suffix)) + next_mcc_id = + list_first_entry(suffix, struct nbl_flow_mcc_node, node) + ->mcc_id; + else + next_mcc_id = NBL_MCC_ID_INVALID; + + /* first add mcc_list */ + if (prev_mcc_id == NBL_MCC_ID_INVALID && !list_empty(head)) { + list_for_each_entry(mcc_head, head, node) { + prev_mcc_id = mcc_head->mcc_id; + ret |= hw_ops->add_mcc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + mcc_node->mcc_id, prev_mcc_id, + next_mcc_id, + mcc_node->mcc_action); + } + goto check_ret; + } + + ret = hw_ops->add_mcc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), mcc_node->mcc_id, + prev_mcc_id, next_mcc_id, mcc_node->mcc_action); +check_ret: + if (ret) { + list_del(&mcc_node->node); + return -EINVAL; + } + + return 0; +} + +/* not consider multicast node first change, need modify all macvlan mcc */ +static void nbl_flow_del_mcc_node(struct nbl_resource_mgt *res_mgt, + struct nbl_flow_mcc_node *mcc_node, + struct list_head *head, + struct list_head *list, + struct list_head *suffix) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_flow_mcc_node *mcc_head = NULL; + u16 prev_mcc_id, next_mcc_id; + + if (list_entry_is_head(mcc_node, head, node) || + list_entry_is_head(mcc_node, list, node)) + return; + + if (mcc_node->mcc_head) { + WARN_ON(!list_empty(list)); + prev_mcc_id = NBL_MCC_ID_INVALID; + next_mcc_id = NBL_MCC_ID_INVALID; + hw_ops->del_mcc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + mcc_node->mcc_id, prev_mcc_id, next_mcc_id); + goto free_node; + } + + if 
(list_is_first(&mcc_node->node, list))
+		prev_mcc_id = NBL_MCC_ID_INVALID;
+	else
+		prev_mcc_id = list_prev_entry(mcc_node, node)->mcc_id;
+
+	if (list_is_last(&mcc_node->node, list))
+		next_mcc_id = NBL_MCC_ID_INVALID;
+	else
+		next_mcc_id = list_next_entry(mcc_node, node)->mcc_id;
+
+	/* not the head; the next mcc may point into the suffix list */
+	if (next_mcc_id == NBL_MCC_ID_INVALID && suffix && !list_empty(suffix))
+		next_mcc_id =
+			list_first_entry(suffix, struct nbl_flow_mcc_node, node)
+			->mcc_id;
+
+	if (prev_mcc_id == NBL_MCC_ID_INVALID && !list_empty(head)) {
+		list_for_each_entry(mcc_head, head, node) {
+			prev_mcc_id = mcc_head->mcc_id;
+			hw_ops->del_mcc(NBL_RES_MGT_TO_HW_PRIV(res_mgt),
+					mcc_node->mcc_id, prev_mcc_id,
+					next_mcc_id);
+		}
+		goto free_node;
+	}
+
+	hw_ops->del_mcc(NBL_RES_MGT_TO_HW_PRIV(res_mgt), mcc_node->mcc_id,
+			prev_mcc_id, next_mcc_id);
+free_node:
+	list_del(&mcc_node->node);
+}
+
+static struct nbl_flow_mcc_group *
+nbl_flow_alloc_mcc_group(struct nbl_resource_mgt *res_mgt,
+			 unsigned long *vsi_bitmap, u16 eth_id, bool multi,
+			 u16 vsi_num)
+{
+	struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt);
+	struct nbl_flow_switch_res *res = &flow_mgt->switch_res[eth_id];
+	struct nbl_flow_mcc_group *group;
+	struct nbl_flow_mcc_node *mcc_node, *mcc_node_safe;
+	int ret;
+	int bit;
+
+	/* The structure of the mc macvlan list is:
+	 *
+	 *             macvlan up
+	 *                 |
+	 *                 |
+	 *   BMC ->        |
+	 *             VSI 0 -> VSI 1 -> ... -> allmulti list
+	 *   ETH ->        |
+	 *                 |
+	 *                 |
+	 *             macvlan down
+	 *
+	 * So up mc pkts are sent to the BMC and need not be broadcast to
+	 * the eth port, while down mc pkts are sent to the eth port but
+	 * not to the BMC.
+	 * Each mac flow entry has independent bmc/eth mcc nodes.
+	 * All mac flow entries share the allmulti vsi nodes.
+ */ + group = kzalloc(sizeof(*group), GFP_KERNEL); + if (!group) + return NULL; + + group->vsi_base = eth_id * NBL_FLOW_LEONIS_VSI_NUM_PER_ETH; + group->multi = multi; + group->nbits = flow_mgt->vsi_max_per_switch; + group->ref_cnt = 1; + group->vsi_num = vsi_num; + + INIT_LIST_HEAD(&group->group_node); + INIT_LIST_HEAD(&group->mcc_node); + INIT_LIST_HEAD(&group->mcc_head); + + group->vsi_bitmap = kcalloc(BITS_TO_LONGS(flow_mgt->vsi_max_per_switch), + sizeof(long), GFP_KERNEL); + if (!group->vsi_bitmap) + goto alloc_vsi_bitmap_failed; + + bitmap_copy(group->vsi_bitmap, vsi_bitmap, + flow_mgt->vsi_max_per_switch); + if (!multi) + goto add_mcc_node; + + mcc_node = + nbl_flow_alloc_mcc_node(flow_mgt, NBL_MCC_INDEX_ETH, eth_id, 1); + if (!mcc_node) + goto free_nodes; + + ret = nbl_flow_add_mcc_node(res_mgt, mcc_node, &group->mcc_head, + &group->mcc_node, NULL); + if (ret) { + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + goto free_nodes; + } + + group->down_mcc_id = mcc_node->mcc_id; + mcc_node = nbl_flow_alloc_mcc_node(flow_mgt, NBL_MCC_INDEX_BMC, + NBL_FLOW_MCC_BMC_DPORT, 1); + if (!mcc_node) + goto free_nodes; + + ret = nbl_flow_add_mcc_node(res_mgt, mcc_node, &group->mcc_head, + &group->mcc_node, NULL); + if (ret) { + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + goto free_nodes; + } + group->up_mcc_id = mcc_node->mcc_id; + +add_mcc_node: + for_each_set_bit(bit, vsi_bitmap, flow_mgt->vsi_max_per_switch) { + mcc_node = nbl_flow_alloc_mcc_node(flow_mgt, NBL_MCC_INDEX_VSI, + bit + group->vsi_base, 0); + if (!mcc_node) + goto free_nodes; + + if (multi) + ret = nbl_flow_add_mcc_node(res_mgt, mcc_node, + &group->mcc_head, + &group->mcc_node, + &res->allmulti_list); + else + ret = nbl_flow_add_mcc_node(res_mgt, mcc_node, + &group->mcc_head, + &group->mcc_node, NULL); + + if (ret) { + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + goto free_nodes; + } + } + + if (list_empty(&group->mcc_head)) { + group->down_mcc_id = list_first_entry(&group->mcc_node, + struct 
nbl_flow_mcc_node, + node) + ->mcc_id; + group->up_mcc_id = list_first_entry(&group->mcc_node, + struct nbl_flow_mcc_node, + node) + ->mcc_id; + } + list_add_tail(&group->group_node, &res->mcc_group_head); + + return group; + +free_nodes: + list_for_each_entry_safe(mcc_node, mcc_node_safe, &group->mcc_node, + node) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, &group->mcc_head, + &group->mcc_node, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + } + + list_for_each_entry_safe(mcc_node, mcc_node_safe, &group->mcc_head, + node) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, &group->mcc_head, + &group->mcc_node, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + } + kfree(group->vsi_bitmap); +alloc_vsi_bitmap_failed: + kfree(group); + + return NULL; +} + +static void nbl_flow_free_mcc_group(struct nbl_resource_mgt *res_mgt, + struct nbl_flow_mcc_group *group) +{ + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_mcc_node *mcc_node, *mcc_node_safe; + + group->ref_cnt--; + if (group->ref_cnt) + return; + + list_del(&group->group_node); + list_for_each_entry_safe(mcc_node, mcc_node_safe, &group->mcc_node, + node) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, &group->mcc_head, + &group->mcc_node, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + } + + list_for_each_entry_safe(mcc_node, mcc_node_safe, &group->mcc_head, + node) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, &group->mcc_head, + &group->mcc_node, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + } + + kfree(group->vsi_bitmap); + kfree(group); +} + +static struct nbl_flow_mcc_group * +nbl_find_same_mcc_group(struct nbl_flow_switch_res *res, + unsigned long *vsi_bitmap, bool multi) +{ + struct nbl_flow_mcc_group *group = NULL; + + list_for_each_entry(group, &res->mcc_group_head, group_node) + if (group->multi == multi && + __bitmap_equal(group->vsi_bitmap, vsi_bitmap, + group->nbits)) { + group->ref_cnt++; + return group; + } + + return NULL; +} + +static void 
nbl_flow_macvlan_node_del_action_func(void *priv, void *x_key, + void *y_key, void *data) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_l2_data *rule_data = (struct nbl_flow_l2_data *)data; + int i; + + for (i = 0; i < NBL_FLOW_MACVLAN_MAX; i++) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + nbl_flow_del_flow(res_mgt, &rule_data->entry[i]); + } + + /* delete mcc */ + if (rule_data->mcast_flow) + nbl_flow_free_mcc_group(res_mgt, rule_data->mcc_group); +} + +static u32 nbl_flow_get_reserve_macvlan_cnt(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + struct nbl_flow_switch_res *res; + int i; + u32 reserve_cnt = 0; + + for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET) { + res = &flow_mgt->switch_res[i]; + if (res->num_vfs) + reserve_cnt += (res->num_vfs - res->active_vfs) * 3; + } + + return reserve_cnt; +} + +static int nbl_flow_macvlan_node_vsi_match_func(void *condition, void *x_key, + void *y_key, void *data) +{ + u16 vsi = *(u16 *)condition; + struct nbl_flow_l2_data *rule_data = (struct nbl_flow_l2_data *)data; + + if (!rule_data->mcast_flow) + return rule_data->vsi == vsi ? 
0 : -1; + else + return !test_bit(vsi - rule_data->mcc_group->vsi_base, + rule_data->mcc_group->vsi_bitmap); +} + +static void nbl_flow_macvlan_node_found_vsi_action(void *priv, void *x_key, + void *y_key, void *data) +{ + bool *match = (bool *)(priv); + + *match = 1; +} + +static int nbl_flow_add_macvlan(void *priv, u8 *mac, u16 vlan, u16 vsi) +{ + struct nbl_hash_xy_tbl_scan_key scan_key; + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_switch_res *res; + struct nbl_flow_l2_data *rule_data; + struct nbl_flow_mcc_group *mcc_group = NULL, *pend_group = NULL; + unsigned long *vsi_bitmap; + struct nbl_flow_param param = { 0 }; + void *tbl; + int i; + int ret = 0; + int pf_id, vf_id; + u32 reserve_cnt; + u16 eth_id; + u16 vsi_base; + u16 vsi_num = 0; + u16 func_id; + bool alloc_rule = 0; + bool need_mcast = 0; + bool vsi_match = 0; + + eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi); + res = &flow_mgt->switch_res[eth_id]; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi); + nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pf_id, &vf_id); + reserve_cnt = nbl_flow_get_reserve_macvlan_cnt(res_mgt); + + if (flow_mgt->flow_id_cnt <= reserve_cnt && + (vf_id == U32_MAX || test_bit(vf_id, res->vf_bitmap))) + return -ENOSPC; + + vsi_bitmap = kcalloc(BITS_TO_LONGS(flow_mgt->vsi_max_per_switch), + sizeof(long), GFP_KERNEL); + if (!vsi_bitmap) + return -ENOMEM; + + NBL_HASH_XY_TBL_SCAN_KEY_INIT(&scan_key, + NBL_HASH_TBL_OP_SHOW, + NBL_HASH_TBL_ALL_SCAN, + false, NULL, NULL, &vsi, + &nbl_flow_macvlan_node_vsi_match_func, + &vsi_match, + &nbl_flow_macvlan_node_found_vsi_action); + + param.mac = mac; + param.vid = vlan; + param.eth = eth_id; + param.vsi = vsi; + param.mcc_id = NBL_MCC_ID_INVALID; + + vsi_base = eth_id * NBL_FLOW_LEONIS_VSI_NUM_PER_ETH; + tbl = res->mac_hash_tbl; + rule_data = + (struct nbl_flow_l2_data *)nbl_common_get_hash_xy_node(tbl, + mac, + &vlan); + if 
(rule_data) { + if (rule_data->mcast_flow && + test_bit(vsi - rule_data->mcc_group->vsi_base, + rule_data->mcc_group->vsi_bitmap)) + goto success; + else if (!rule_data->mcast_flow && rule_data->vsi == vsi) + goto success; + + if (!rule_data->mcast_flow) { + vsi_num = 1; + set_bit(rule_data->vsi - vsi_base, vsi_bitmap); + } else { + vsi_num = rule_data->mcc_group->vsi_num; + bitmap_copy(vsi_bitmap, + rule_data->mcc_group->vsi_bitmap, + flow_mgt->vsi_max_per_switch); + } + need_mcast = 1; + + } else { + rule_data = kzalloc(sizeof(*rule_data), GFP_KERNEL); + if (!rule_data) { + ret = -ENOMEM; + goto alloc_rule_failed; + } + alloc_rule = 1; + rule_data->multi = is_multicast_ether_addr(mac); + rule_data->mcast_flow = 0; + } + + if (rule_data->multi) + need_mcast = 1; + + if (need_mcast) { + set_bit(vsi - vsi_base, vsi_bitmap); + vsi_num++; + mcc_group = nbl_find_same_mcc_group(res, vsi_bitmap, + rule_data->multi); + if (!mcc_group) { + mcc_group = nbl_flow_alloc_mcc_group(res_mgt, + vsi_bitmap, eth_id, + rule_data->multi, + vsi_num); + if (!mcc_group) { + ret = -ENOMEM; + goto alloc_mcc_group_failed; + } + } + if (rule_data->mcast_flow) + pend_group = rule_data->mcc_group; + } else { + rule_data->vsi = vsi; + } + + if (!alloc_rule) { + for (i = 0; i < NBL_FLOW_MACVLAN_MAX; i++) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + + nbl_flow_del_flow(res_mgt, &rule_data->entry[i]); + } + + if (pend_group) + nbl_flow_free_mcc_group(res_mgt, pend_group); + } + + for (i = 0; i < NBL_FLOW_MACVLAN_MAX; i++) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + if (mcc_group) { + if (i <= NBL_FLOW_UP) + param.mcc_id = mcc_group->up_mcc_id; + else + param.mcc_id = mcc_group->down_mcc_id; + } + ret = nbl_flow_add_flow(res_mgt, param, i, + &rule_data->entry[i]); + if (ret) + goto add_flow_failed; + } + + if (mcc_group) { + rule_data->mcast_flow = 1; + rule_data->mcc_group = mcc_group; + } else { + rule_data->mcast_flow = 0; + rule_data->vsi = vsi; + } + + if 
(alloc_rule) { + ret = nbl_common_alloc_hash_xy_node(res->mac_hash_tbl, mac, + &vlan, rule_data); + if (ret) + goto add_flow_failed; + } + + if (alloc_rule) + kfree(rule_data); +success: + kfree(vsi_bitmap); + + if (vf_id != U32_MAX && !test_bit(vf_id, res->vf_bitmap)) { + set_bit(vf_id, res->vf_bitmap); + res->active_vfs++; + } + + return 0; + +add_flow_failed: + while (--i + 1) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + nbl_flow_del_flow(res_mgt, &rule_data->entry[i]); + } + if (!alloc_rule) + nbl_common_free_hash_xy_node(res->mac_hash_tbl, mac, &vlan); + if (mcc_group) + nbl_flow_free_mcc_group(res_mgt, mcc_group); +alloc_mcc_group_failed: + if (alloc_rule) + kfree(rule_data); +alloc_rule_failed: + kfree(vsi_bitmap); + + nbl_common_scan_hash_xy_node(res->mac_hash_tbl, &scan_key); + if (vf_id != U32_MAX && test_bit(vf_id, res->vf_bitmap) && !vsi_match) { + clear_bit(vf_id, res->vf_bitmap); + res->active_vfs--; + } + + return ret; +} + +static void nbl_flow_del_macvlan(void *priv, u8 *mac, u16 vlan, u16 vsi) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_mcc_group *mcc_group = NULL, *pend_group = NULL; + unsigned long *vsi_bitmap; + struct nbl_flow_switch_res *res; + struct nbl_flow_l2_data *rule_data; + struct nbl_flow_param param = { 0 }; + struct nbl_hash_xy_tbl_scan_key scan_key; + int i; + int ret; + int pf_id, vf_id; + u32 vsi_num; + u16 vsi_base = 0; + u16 eth_id; + u16 func_id; + bool need_mcast = false; + bool add_flow = false; + bool vsi_match = 0; + + eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi); + res = &flow_mgt->switch_res[eth_id]; + + rule_data = nbl_common_get_hash_xy_node(res->mac_hash_tbl, mac, &vlan); + if (!rule_data) + return; + if (!rule_data->mcast_flow && rule_data->vsi != vsi) + return; + else if (rule_data->mcast_flow && + !test_bit(vsi - rule_data->mcc_group->vsi_base, + 
rule_data->mcc_group->vsi_bitmap)) + return; + + vsi_bitmap = kcalloc(BITS_TO_LONGS(flow_mgt->vsi_max_per_switch), + sizeof(long), GFP_KERNEL); + if (!vsi_bitmap) + return; + + func_id = nbl_res_vsi_id_to_func_id(res_mgt, vsi); + nbl_res_func_id_to_pfvfid(res_mgt, func_id, &pf_id, &vf_id); + NBL_HASH_XY_TBL_SCAN_KEY_INIT(&scan_key, NBL_HASH_TBL_OP_SHOW, + NBL_HASH_TBL_ALL_SCAN, false, NULL, NULL, + &vsi, + &nbl_flow_macvlan_node_vsi_match_func, + &vsi_match, + &nbl_flow_macvlan_node_found_vsi_action); + + if (rule_data->mcast_flow) { + bitmap_copy(vsi_bitmap, rule_data->mcc_group->vsi_bitmap, + flow_mgt->vsi_max_per_switch); + vsi_num = rule_data->mcc_group->vsi_num; + clear_bit(vsi - rule_data->mcc_group->vsi_base, vsi_bitmap); + vsi_num--; + vsi_base = (u16)rule_data->mcc_group->vsi_base; + + if (rule_data->mcc_group->vsi_num > 1) + add_flow = true; + + if ((rule_data->multi && rule_data->mcc_group->vsi_num > 1) || + (!rule_data->multi && rule_data->mcc_group->vsi_num > 2)) + need_mcast = 1; + pend_group = rule_data->mcc_group; + } + + if (need_mcast) { + mcc_group = nbl_find_same_mcc_group(res, vsi_bitmap, + rule_data->multi); + if (!mcc_group) { + mcc_group = nbl_flow_alloc_mcc_group(res_mgt, + vsi_bitmap, eth_id, + rule_data->multi, + vsi_num); + if (!mcc_group) + goto alloc_mcc_group_failed; + } + } + + for (i = 0; i < NBL_FLOW_MACVLAN_MAX; i++) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + + nbl_flow_del_flow(res_mgt, &rule_data->entry[i]); + } + + if (pend_group) + nbl_flow_free_mcc_group(res_mgt, pend_group); + + if (add_flow) { + param.mac = mac; + param.vid = vlan; + param.eth = eth_id; + param.mcc_id = NBL_MCC_ID_INVALID; + param.vsi = (u16)find_first_bit(vsi_bitmap, + flow_mgt->vsi_max_per_switch) + + vsi_base; + + for (i = 0; i < NBL_FLOW_MACVLAN_MAX; i++) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + if (mcc_group) { + if (i <= NBL_FLOW_UP) + param.mcc_id = mcc_group->up_mcc_id; + else + param.mcc_id = 
mcc_group->down_mcc_id; + } + ret = nbl_flow_add_flow(res_mgt, param, i, + &rule_data->entry[i]); + if (ret) + goto add_flow_failed; + } + + if (mcc_group) { + rule_data->mcast_flow = 1; + rule_data->mcc_group = mcc_group; + } else { + rule_data->mcast_flow = 0; + rule_data->vsi = param.vsi; + } + } + + if (!add_flow) + nbl_common_free_hash_xy_node(res->mac_hash_tbl, mac, &vlan); + +alloc_mcc_group_failed: + kfree(vsi_bitmap); + + nbl_common_scan_hash_xy_node(res->mac_hash_tbl, &scan_key); + if (vf_id != U32_MAX && test_bit(vf_id, res->vf_bitmap) && !vsi_match) { + clear_bit(vf_id, res->vf_bitmap); + res->active_vfs--; + } + + return; + +add_flow_failed: + while (--i + 1) { + if (i == NBL_FLOW_UP_TNL && rule_data->multi) + continue; + nbl_flow_del_flow(res_mgt, &rule_data->entry[i]); + } + if (mcc_group) + nbl_flow_free_mcc_group(res_mgt, mcc_group); + nbl_common_free_hash_xy_node(res->mac_hash_tbl, mac, &vlan); + kfree(vsi_bitmap); + nbl_common_scan_hash_xy_node(res->mac_hash_tbl, &scan_key); + if (vf_id != U32_MAX && test_bit(vf_id, res->vf_bitmap) && !vsi_match) { + clear_bit(vf_id, res->vf_bitmap); + res->active_vfs--; + } +} + +static int nbl_flow_add_lldp(void *priv, u16 vsi) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_flow_lldp_rule *rule; + struct nbl_flow_param param = { 0 }; + + list_for_each_entry(rule, &flow_mgt->lldp_list, node) + if (rule->vsi == vsi) + return 0; + + rule = kzalloc(sizeof(*rule), GFP_KERNEL); + if (!rule) + return -ENOMEM; + + param.eth = nbl_res_vsi_id_to_eth_id(res_mgt, vsi); + param.vsi = vsi; + param.ether_type = ETH_P_LLDP; + + if (nbl_flow_add_flow(res_mgt, param, NBL_FLOW_LLDP_LACP_UP, + &rule->entry)) { + nbl_err(common, "Fail to add lldp flow for vsi %d", vsi); + kfree(rule); + return -EFAULT; + } + + rule->vsi = vsi; + list_add_tail(&rule->node, 
&flow_mgt->lldp_list); + + return 0; +} + +static void nbl_flow_del_lldp(void *priv, u16 vsi) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt; + struct nbl_flow_lldp_rule *rule; + + flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + + list_for_each_entry(rule, &flow_mgt->lldp_list, node) + if (rule->vsi == vsi) + break; + + if (list_entry_is_head(rule, &flow_mgt->lldp_list, node)) + return; + + nbl_flow_del_flow(res_mgt, &rule->entry); + + list_del(&rule->node); + kfree(rule); +} + +static int nbl_flow_change_mcc_group_chain(struct nbl_resource_mgt *res_mgt, + u8 eth, u16 current_mcc_id) +{ + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_switch_res *switch_res = &flow_mgt->switch_res[eth]; + struct nbl_flow_mcc_group *group; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + void *p = NBL_RES_MGT_TO_HW_PRIV(res_mgt); + u16 node_mcc; + + list_for_each_entry(group, &switch_res->mcc_group_head, group_node) + if (group->multi && !list_empty(&group->mcc_node)) { + node_mcc = list_last_entry(&group->mcc_node, + struct nbl_flow_mcc_node, + node) + ->mcc_id; + hw_ops->update_mcc_next_node(p, node_mcc, + current_mcc_id); + } + switch_res->allmulti_first_mcc = current_mcc_id; + return 0; +} + +static int nbl_flow_add_multi_mcast(void *priv, u16 vsi) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_switch_res *switch_res; + struct nbl_flow_mcc_node *node; + int ret; + u16 current_mcc_id; + u8 eth = nbl_res_vsi_id_to_eth_id(res_mgt, vsi); + + switch_res = &flow_mgt->switch_res[eth]; + list_for_each_entry(node, &switch_res->allmulti_list, node) + if (node->data == vsi && node->type == NBL_MCC_INDEX_VSI) + return 0; + + node = nbl_flow_alloc_mcc_node(flow_mgt, NBL_MCC_INDEX_VSI, vsi, 0); + if (!node) + return -ENOSPC; + + switch_res = &flow_mgt->switch_res[eth]; + ret 
= nbl_flow_add_mcc_node(res_mgt, node, &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); + if (ret) { + nbl_flow_free_mcc_node(flow_mgt, node); + return ret; + } + + if (list_empty(&switch_res->allmulti_list)) + current_mcc_id = NBL_MCC_ID_INVALID; + else + current_mcc_id = list_first_entry(&switch_res->allmulti_list, + struct nbl_flow_mcc_node, + node) + ->mcc_id; + + if (current_mcc_id != switch_res->allmulti_first_mcc) + nbl_flow_change_mcc_group_chain(res_mgt, eth, current_mcc_id); + + return 0; +} + +static void nbl_flow_del_multi_mcast(void *priv, u16 vsi) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_switch_res *switch_res; + struct nbl_flow_mcc_node *mcc_node; + u16 current_mcc_id; + u8 eth = nbl_res_vsi_id_to_eth_id(res_mgt, vsi); + + switch_res = &flow_mgt->switch_res[eth]; + list_for_each_entry(mcc_node, &switch_res->allmulti_list, node) + if (mcc_node->data == vsi && + mcc_node->type == NBL_MCC_INDEX_VSI) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, + &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + break; + } + + if (list_empty(&switch_res->allmulti_list)) + current_mcc_id = NBL_MCC_ID_INVALID; + else + current_mcc_id = list_first_entry(&switch_res->allmulti_list, + struct nbl_flow_mcc_node, + node) + ->mcc_id; + + if (current_mcc_id != switch_res->allmulti_first_mcc) + nbl_flow_change_mcc_group_chain(res_mgt, eth, current_mcc_id); +} + +static int nbl_flow_add_multi_group(struct nbl_resource_mgt *res_mgt, u8 eth) +{ + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_switch_res *switch_res = &flow_mgt->switch_res[eth]; + struct nbl_flow_param param_up = {0}; + struct nbl_flow_mcc_node *up_node; + struct nbl_flow_param param_down = {0}; + struct nbl_flow_mcc_node *down_node; + int i, ret; + + down_node = + 
nbl_flow_alloc_mcc_node(flow_mgt, NBL_MCC_INDEX_ETH, eth, 1); + if (!down_node) + return -ENOSPC; + + ret = nbl_flow_add_mcc_node(res_mgt, down_node, + &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); + if (ret) + goto add_eth_mcc_node_failed; + + param_down.mcc_id = down_node->mcc_id; + param_down.eth = eth; + for (i = 0; + i < NBL_FLOW_DOWN_MULTI_MCAST_END - NBL_FLOW_L2_DOWN_MULTI_MCAST; + i++) { + ret = nbl_flow_add_flow(res_mgt, param_down, + i + NBL_FLOW_L2_DOWN_MULTI_MCAST, + &switch_res->allmulti_down[i]); + if (ret) + goto add_down_flow_failed; + } + + up_node = nbl_flow_alloc_mcc_node(flow_mgt, NBL_MCC_INDEX_BMC, + NBL_FLOW_MCC_BMC_DPORT, 1); + if (!up_node) { + ret = -ENOSPC; + goto alloc_bmc_node_failed; + } + + ret = nbl_flow_add_mcc_node(res_mgt, up_node, + &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); + if (ret) + goto add_bmc_mcc_node_failed; + + param_up.mcc_id = up_node->mcc_id; + param_up.eth = eth; + for (i = 0; + i < NBL_FLOW_UP_MULTI_MCAST_END - NBL_FLOW_L2_UP_MULTI_MCAST; + i++) { + ret = nbl_flow_add_flow(res_mgt, param_up, + i + NBL_FLOW_L2_UP_MULTI_MCAST, + &switch_res->allmulti_up[i]); + if (ret) + goto add_up_flow_failed; + } + + switch_res->ether_id = eth; + switch_res->allmulti_first_mcc = NBL_MCC_ID_INVALID; + switch_res->vld = 1; + + return 0; + +add_up_flow_failed: + while (--i >= 0) + nbl_flow_del_flow(res_mgt, &switch_res->allmulti_up[i]); + nbl_flow_del_mcc_node(res_mgt, up_node, &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); +add_bmc_mcc_node_failed: + nbl_flow_free_mcc_node(flow_mgt, up_node); +alloc_bmc_node_failed: +add_down_flow_failed: + while (--i >= 0) + nbl_flow_del_flow(res_mgt, &switch_res->allmulti_down[i]); + nbl_flow_del_mcc_node(res_mgt, down_node, &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); +add_eth_mcc_node_failed: + nbl_flow_free_mcc_node(flow_mgt, down_node); + return ret; +} + +static void nbl_flow_del_multi_group(struct nbl_resource_mgt 
*res_mgt, u8 eth) +{ + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + struct nbl_flow_switch_res *switch_res = &flow_mgt->switch_res[eth]; + struct nbl_flow_mcc_node *mcc_node, *mcc_node_safe; + + if (!switch_res->vld) + return; + + nbl_flow_del_flow(res_mgt, &switch_res->allmulti_up[0]); + nbl_flow_del_flow(res_mgt, &switch_res->allmulti_up[1]); + nbl_flow_del_flow(res_mgt, &switch_res->allmulti_down[0]); + nbl_flow_del_flow(res_mgt, &switch_res->allmulti_down[1]); + + list_for_each_entry_safe(mcc_node, mcc_node_safe, + &switch_res->allmulti_list, node) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, + &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + } + + list_for_each_entry_safe(mcc_node, mcc_node_safe, + &switch_res->allmulti_head, node) { + nbl_flow_del_mcc_node(res_mgt, mcc_node, + &switch_res->allmulti_head, + &switch_res->allmulti_list, NULL); + nbl_flow_free_mcc_node(flow_mgt, mcc_node); + } + + INIT_LIST_HEAD(&switch_res->allmulti_list); + INIT_LIST_HEAD(&switch_res->allmulti_head); + switch_res->vld = 0; + switch_res->allmulti_first_mcc = NBL_MCC_ID_INVALID; +} + +static void nbl_flow_remove_multi_group(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + int i; + + for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET) + nbl_flow_del_multi_group(res_mgt, i); +} + +static int nbl_flow_setup_multi_group(void *priv) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_eth_info *eth_info = NBL_RES_MGT_TO_ETH_INFO(res_mgt); + int i, ret = 0; + + for_each_set_bit(i, eth_info->eth_bitmap, NBL_MAX_ETHERNET) { + ret = nbl_flow_add_multi_group(res_mgt, i); + if (ret) + goto fail; + } + + return 0; + +fail: + nbl_flow_remove_multi_group(res_mgt); + return ret; +} + +static u16 nbl_vsi_mtu_index(struct nbl_resource_mgt *res_mgt, u16 vsi_id) +{ + 
struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u16 index; + + index = hw_ops->get_mtu_index(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id); + return index - 1; +} + +static void nbl_clear_mtu_entry(struct nbl_resource_mgt *res_mgt, u16 vsi_id) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + u16 mtu_index; + + mtu_index = nbl_vsi_mtu_index(res_mgt, vsi_id); + if (mtu_index < NBL_MAX_MTU_NUM) { + res_mgt->resource_info->mtu_list[mtu_index].ref_count--; + hw_ops->set_vsi_mtu(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id, 0); + if (res_mgt->resource_info->mtu_list[mtu_index].ref_count == + 0) { + hw_ops->set_mtu(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + mtu_index + 1, 0); + res_mgt->resource_info->mtu_list[mtu_index].mtu_value = + 0; + } + } +} + +static void nbl_flow_clear_flow(void *priv, u16 vsi_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + void *mac_hash_tbl; + struct nbl_hash_xy_tbl_scan_key scan_key; + u8 eth_id; + + eth_id = nbl_res_vsi_id_to_eth_id(res_mgt, vsi_id); + mac_hash_tbl = flow_mgt->switch_res[eth_id].mac_hash_tbl; + + nbl_clear_mtu_entry(res_mgt, vsi_id); + NBL_HASH_XY_TBL_SCAN_KEY_INIT(&scan_key, NBL_HASH_TBL_OP_DELETE, + NBL_HASH_TBL_ALL_SCAN, false, NULL, NULL, + &vsi_id, + &nbl_flow_macvlan_node_vsi_match_func, + res_mgt, + &nbl_flow_macvlan_node_del_action_func); + nbl_common_scan_hash_xy_node(mac_hash_tbl, &scan_key); + nbl_flow_del_multi_mcast(res_mgt, vsi_id); +} + +static void nbl_res_flr_clear_flow(void *priv, u16 vf_id) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + u16 func_id = vf_id + NBL_MAX_PF; + u16 vsi_id = nbl_res_func_id_to_vsi_id(res_mgt, func_id, + NBL_VSI_SERV_VF_DATA_TYPE); + + if (nbl_res_vf_is_active(priv, func_id)) + nbl_flow_clear_flow(priv, vsi_id); +} + +static int nbl_res_flow_check_flow_table_spec(void *priv, u16 vlan_cnt, + u16 unicast_cnt, + u16 multicast_cnt) +{ + struct 
nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_flow_mgt *flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + u32 reserve_cnt = nbl_flow_get_reserve_macvlan_cnt(res_mgt); + u32 need = vlan_cnt * (3 * unicast_cnt + 2 * multicast_cnt); + + if (reserve_cnt + need > flow_mgt->flow_id_cnt) + return -ENOSPC; + + return 0; +} + +static int nbl_res_set_mtu(void *priv, u16 vsi_id, u16 mtu) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_mtu_entry *mtu_list = &res_mgt->resource_info->mtu_list[0]; + int i, found_idx = -1, first_zero_idx = -1; + u16 real_mtu = mtu + ETH_HLEN + 2 * VLAN_HLEN; + + nbl_clear_mtu_entry(res_mgt, vsi_id); + if (mtu == 0) + return 0; + + for (i = 0; i < NBL_MAX_MTU_NUM; i++) { + if (mtu_list[i].mtu_value == real_mtu) { + found_idx = i; + break; + } + + if (!mtu_list[i].mtu_value && first_zero_idx == -1) + first_zero_idx = i; + } + + if (first_zero_idx == -1 && found_idx == -1) + return 0; + + if (found_idx != -1) { + mtu_list[found_idx].ref_count++; + hw_ops->set_vsi_mtu(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id, + found_idx + 1); + return 0; + } + + if (first_zero_idx != -1) { + mtu_list[first_zero_idx].ref_count++; + mtu_list[first_zero_idx].mtu_value = real_mtu; + hw_ops->set_vsi_mtu(NBL_RES_MGT_TO_HW_PRIV(res_mgt), vsi_id, + first_zero_idx + 1); + hw_ops->set_mtu(NBL_RES_MGT_TO_HW_PRIV(res_mgt), + first_zero_idx + 1, real_mtu); + } + + return 0; +} + +/* NBL_FLOW_SET_OPS(ops_name, func) + * + * Use X macros to reduce duplication in the setup and remove code.
+ */ +#define NBL_FLOW_OPS_TBL \ +do { \ + NBL_FLOW_SET_OPS(add_macvlan, nbl_flow_add_macvlan); \ + NBL_FLOW_SET_OPS(del_macvlan, nbl_flow_del_macvlan); \ + NBL_FLOW_SET_OPS(add_lldp_flow, nbl_flow_add_lldp); \ + NBL_FLOW_SET_OPS(del_lldp_flow, nbl_flow_del_lldp); \ + NBL_FLOW_SET_OPS(add_multi_mcast, nbl_flow_add_multi_mcast); \ + NBL_FLOW_SET_OPS(del_multi_mcast, nbl_flow_del_multi_mcast); \ + NBL_FLOW_SET_OPS(setup_multi_group, nbl_flow_setup_multi_group); \ + NBL_FLOW_SET_OPS(remove_multi_group, nbl_flow_remove_multi_group); \ + NBL_FLOW_SET_OPS(clear_flow, nbl_flow_clear_flow); \ + NBL_FLOW_SET_OPS(flr_clear_flows, nbl_res_flr_clear_flow); \ + NBL_FLOW_SET_OPS(set_mtu, nbl_res_set_mtu); \ + NBL_FLOW_SET_OPS(check_flow_table_spec, \ + nbl_res_flow_check_flow_table_spec); \ +} while (0) + +static void nbl_flow_remove_mgt(struct device *dev, + struct nbl_resource_mgt *res_mgt) +{ + struct nbl_flow_mgt *fl_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + int i; + struct nbl_hash_xy_tbl_del_key del_key; + + NBL_HASH_XY_TBL_DEL_KEY_INIT(&del_key, res_mgt, + &nbl_flow_macvlan_node_del_action_func); + for (i = 0; i < NBL_MAX_ETHERNET; i++) { + nbl_common_rm_hash_xy_table(fl_mgt->switch_res[i].mac_hash_tbl, + &del_key); + if (fl_mgt->switch_res[i].vf_bitmap) + devm_kfree(dev, fl_mgt->switch_res[i].vf_bitmap); + } + + if (fl_mgt->flow_id_bitmap) + devm_kfree(dev, fl_mgt->flow_id_bitmap); + if (fl_mgt->mcc_id_bitmap) + devm_kfree(dev, fl_mgt->mcc_id_bitmap); + fl_mgt->flow_id_cnt = 0; + devm_kfree(dev, fl_mgt); + NBL_RES_MGT_TO_FLOW_MGT(res_mgt) = NULL; +} + +static int nbl_flow_setup_mgt(struct device *dev, + struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hash_xy_tbl_key macvlan_tbl_key; + struct nbl_flow_mgt *flow_mgt; + struct nbl_eth_info *eth_info; + int i; + int vf_num = -1; + u16 pf_id; + + flow_mgt = devm_kzalloc(dev, sizeof(struct nbl_flow_mgt), GFP_KERNEL); + if (!flow_mgt) + return -ENOMEM; + + NBL_RES_MGT_TO_FLOW_MGT(res_mgt) = flow_mgt; + eth_info = 
NBL_RES_MGT_TO_ETH_INFO(res_mgt); + + flow_mgt->flow_id_bitmap = + devm_kcalloc(dev, BITS_TO_LONGS(NBL_MACVLAN_TABLE_LEN), + sizeof(long), GFP_KERNEL); + if (!flow_mgt->flow_id_bitmap) + goto setup_mgt_failed; + flow_mgt->flow_id_cnt = NBL_MACVLAN_TABLE_LEN; + + flow_mgt->mcc_id_bitmap = + devm_kcalloc(dev, BITS_TO_LONGS(NBL_FLOW_MCC_INDEX_SIZE), + sizeof(long), GFP_KERNEL); + if (!flow_mgt->mcc_id_bitmap) + goto setup_mgt_failed; + + NBL_HASH_XY_TBL_KEY_INIT(&macvlan_tbl_key, dev, ETH_ALEN, sizeof(u16), + sizeof(struct nbl_flow_l2_data), + NBL_MACVLAN_TBL_BUCKET_SIZE, + NBL_MACVLAN_X_AXIS_BUCKET_SIZE, + NBL_MACVLAN_Y_AXIS_BUCKET_SIZE, false); + for (i = 0; i < NBL_MAX_ETHERNET; i++) { + INIT_LIST_HEAD(&flow_mgt->switch_res[i].allmulti_head); + INIT_LIST_HEAD(&flow_mgt->switch_res[i].allmulti_list); + INIT_LIST_HEAD(&flow_mgt->switch_res[i].mcc_group_head); + + flow_mgt->switch_res[i].mac_hash_tbl = + nbl_common_init_hash_xy_table(&macvlan_tbl_key); + if (!flow_mgt->switch_res[i].mac_hash_tbl) + goto setup_mgt_failed; + pf_id = find_first_bit((unsigned long *)&eth_info->pf_bitmap[i], + 8); + if (pf_id != 8) + vf_num = nbl_res_get_pf_vf_num(res_mgt, pf_id); + + if (vf_num != -1) { + flow_mgt->switch_res[i].num_vfs = vf_num; + flow_mgt->switch_res[i].vf_bitmap = + devm_kcalloc(dev, BITS_TO_LONGS(vf_num), + sizeof(long), GFP_KERNEL); + if (!flow_mgt->switch_res[i].vf_bitmap) + goto setup_mgt_failed; + } else { + flow_mgt->switch_res[i].num_vfs = 0; + flow_mgt->switch_res[i].vf_bitmap = NULL; + } + flow_mgt->switch_res[i].active_vfs = 0; + } + + INIT_LIST_HEAD(&flow_mgt->lldp_list); + INIT_LIST_HEAD(&flow_mgt->lacp_list); + INIT_LIST_HEAD(&flow_mgt->ul4s_head); + INIT_LIST_HEAD(&flow_mgt->dprbac_head); + + flow_mgt->vsi_max_per_switch = NBL_VSI_MAX_ID / eth_info->eth_num; + + return 0; + +setup_mgt_failed: + nbl_flow_remove_mgt(dev, res_mgt); + return -ENOMEM; +} + +int nbl_flow_mgt_start_leonis(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_hw_ops *hw_ops; + struct device
*dev; + int ret = 0; + + dev = NBL_RES_MGT_TO_DEV(res_mgt); + hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + + ret = nbl_flow_setup_mgt(dev, res_mgt); + if (ret) + goto setup_mgt_fail; + + ret = hw_ops->init_fem(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + if (ret) + goto init_fem_fail; + + return 0; + +init_fem_fail: + nbl_flow_remove_mgt(dev, res_mgt); +setup_mgt_fail: + return ret; +} + +void nbl_flow_mgt_stop_leonis(struct nbl_resource_mgt *res_mgt) +{ + struct device *dev; + struct nbl_flow_mgt *flow_mgt; + + dev = NBL_RES_MGT_TO_DEV(res_mgt); + flow_mgt = NBL_RES_MGT_TO_FLOW_MGT(res_mgt); + if (!flow_mgt) + return; + + nbl_flow_remove_mgt(dev, res_mgt); +} + +int nbl_flow_setup_ops_leonis(struct nbl_resource_ops *res_ops) +{ +#define NBL_FLOW_SET_OPS(name, func) \ + do { \ + res_ops->NBL_NAME(name) = func; \ + ; \ + } while (0) + NBL_FLOW_OPS_TBL; +#undef NBL_FLOW_SET_OPS + + return 0; +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h new file mode 100644 index 000000000000..b513eb2afd87 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_flow_leonis.h @@ -0,0 +1,204 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author: + */ +#ifndef _NBL_FLOW_LEONIS_H_ +#define _NBL_FLOW_LEONIS_H_ + +#include "nbl_core.h" +#include "nbl_hw.h" +#include "nbl_resource.h" + +#define NBL_EM_HW_KT_OFFSET (0x1E000) + +#define NBL_TOTAL_MACVLAN_NUM 4096 +#define NBL_MAX_ACTION_NUM 16 + +#define NBL_FLOW_MCC_PXE_SIZE 8 +#define NBL_FLOW_MCC_INDEX_SIZE (4096 - NBL_FLOW_MCC_PXE_SIZE) +#define NBL_FLOW_MCC_INDEX_START (4 * 1024) +#define NBL_FLOW_MCC_BMC_DPORT 0x30D + +#define NBL_MACVLAN_TBL_BUCKET_SIZE 64 +#define NBL_MACVLAN_X_AXIS_BUCKET_SIZE 64 +#define NBL_MACVLAN_Y_AXIS_BUCKET_SIZE 16 + +#define NBL_PP0_POWER 11 + +enum nbl_flow_mcc_index_type { + NBL_MCC_INDEX_ETH, + NBL_MCC_INDEX_VSI, + NBL_MCC_INDEX_BOND, + NBL_MCC_INDEX_BMC, +}; + +#pragma pack(1) + +#define NBL_DUPPKT_PTYPE_NA 135 +#define NBL_DUPPKT_PTYPE_NS 136 + +struct nbl_flow_l2_data { + struct nbl_flow_fem_entry entry[NBL_FLOW_MACVLAN_MAX]; + union { + struct nbl_flow_mcc_group *mcc_group; + u16 vsi; + }; + bool multi; + bool mcast_flow; +}; + +union nbl_l2_hw_up_data_u { + struct nbl_l2_hw_up_data { + u32 act0:22; + u32 act1:22; + u64 rsv1:40; + u32 padding:4; + u32 sport:4; + u32 svlan_id:16; + u64 dst_mac:48; + u32 template:4; + u32 rsv[5]; + } __packed info; +#define NBL_L2_HW_UP_DATA_TAB_WIDTH \ + (sizeof(struct nbl_l2_hw_up_data) / sizeof(u32)) + u32 data[NBL_L2_HW_UP_DATA_TAB_WIDTH]; + u8 hash_key[sizeof(struct nbl_l2_hw_up_data)]; +}; + +union nbl_l2_hw_lldp_lacp_data_u { + struct nbl_l2_hw_lldp_lacp_data { + u32 act0:22; + u32 rsv1:2; + u8 padding[14]; + u32 sport:4; + u32 ether_type:16; + u32 template:4; + u32 rsv[5]; + } __packed info; +#define NBL_L2_HW_LLDP_LACP_DATA_TAB_WIDTH \ + (sizeof(struct nbl_l2_hw_lldp_lacp_data) / sizeof(u32)) + u32 data[NBL_L2_HW_LLDP_LACP_DATA_TAB_WIDTH]; + u8 hash_key[sizeof(struct nbl_l2_hw_lldp_lacp_data)]; +}; + +union nbl_l2_hw_up_multi_mcast_data_u { + struct nbl_l2_hw_up_multi_mcast_data { + u32 act0:22; + u32 rsv1:2; + u8 padding[16]; + u32 sport:4; + u32 template:4; + u32 
rsv[5]; + } __packed info; +#define NBL_L2_HW_UP_MULTI_MCAST_DATA_TAB_WIDTH \ + (sizeof(struct nbl_l2_hw_up_multi_mcast_data) / sizeof(u32)) + u32 data[NBL_L2_HW_UP_MULTI_MCAST_DATA_TAB_WIDTH]; + u8 hash_key[sizeof(struct nbl_l2_hw_up_multi_mcast_data)]; +}; + +union nbl_l2_hw_down_multi_mcast_data_u { + struct nbl_l2_hw_down_multi_mcast_data { + u32 act0:22; + u32 rsv1:2; + u8 rsv2[16]; + u32 padding:2; + u32 sport:2; + u32 template:4; + u32 rsv[5]; + } __packed info; +#define NBL_L2_HW_DOWN_MULTI_MCAST_DATA_TAB_WIDTH \ + (sizeof(struct nbl_l2_hw_down_multi_mcast_data) / sizeof(u32)) + u32 data[NBL_L2_HW_DOWN_MULTI_MCAST_DATA_TAB_WIDTH]; + u8 hash_key[sizeof(struct nbl_l2_hw_down_multi_mcast_data)]; +}; + +union nbl_l2_hw_down_data_u { + struct nbl_l2_hw_down_data { + u32 act0:22; + u32 act1:22; + u64 rsv2:40; + u32 padding:6; + u32 sport:2; + u32 svlan_id:16; + u64 dst_mac:48; + u32 template:4; + u32 rsv[5]; + } __packed info; +#define NBL_L2_HW_DOWN_DATA_TAB_WIDTH \ + (sizeof(struct nbl_l2_hw_down_data) / sizeof(u32)) + u32 data[NBL_L2_HW_DOWN_DATA_TAB_WIDTH]; + u8 hash_key[sizeof(struct nbl_l2_hw_down_data)]; +}; + +union nbl_common_data_u { + struct nbl_common_data { + u32 rsv[10]; + } __packed info; +#define NBL_COMMON_DATA_TAB_WIDTH (sizeof(struct nbl_common_data) / sizeof(u32)) + u32 data[NBL_COMMON_DATA_TAB_WIDTH]; + u8 hash_key[sizeof(struct nbl_common_data)]; +}; + +#pragma pack() + +struct nbl_flow_param { + u8 *mac; + u8 type; + u8 eth; + u16 ether_type; + u16 vid; + u16 vsi; + u16 mcc_id; + u32 index; + u32 *data; + u32 priv_data; + bool for_pmd; +}; + +struct nbl_mt_input { + u8 key[NBL_KT_BYTE_LEN]; + u8 at_num; + u8 kt_left_num; + u32 tbl_id; + u16 depth; + u16 power; +}; + +struct nbl_ht_item { + u16 ht0_hash; + u16 ht1_hash; + u16 hash_bucket; + u32 key_index; + u8 ht_table; +}; + +struct nbl_kt_item { + union nbl_common_data_u kt_data; +}; + +struct nbl_tcam_item { + struct nbl_ht_item ht_item; + struct nbl_kt_item kt_item; + u32 
tcam_action[NBL_MAX_ACTION_NUM]; + bool tcam_flag; + u8 key_mode; + u8 pp_type; + u32 *pp_tcam_count; + u16 tcam_index; +}; + +struct nbl_tcam_ad_item { + u32 action[NBL_MAX_ACTION_NUM]; +}; + +struct nbl_flow_rule_cfg_ops { + int (*cfg_action)(struct nbl_flow_param param, u32 *action0, + u32 *action1); + int (*cfg_key)(union nbl_common_data_u *data, + struct nbl_flow_param param, u8 eth_mode); + void (*cfg_kt_action)(union nbl_common_data_u *data, u32 action0, + u32 action1); +}; + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c index 4ee35f46c785..0b15d6365513 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c @@ -25,6 +25,467 @@ static u32 nbl_hw_get_quirks(void *priv) return quirks; } +static int nbl_send_kt_data(struct nbl_hw_mgt *hw_mgt, + union nbl_fem_kt_acc_ctrl_u *kt_ctrl, u8 *data, + struct nbl_common_info *common) +{ + union nbl_fem_kt_acc_ack_u kt_ack = { .info = { 0 } }; + u32 times = 3; + + nbl_hw_wr_regs(hw_mgt, NBL_FEM_KT_ACC_DATA, data, NBL_KT_HW_L2_DW_LEN); + nbl_debug(common, "Set kt = %08x-%08x-%08x-%08x-%08x", ((u32 *)data)[0], + ((u32 *)data)[1], ((u32 *)data)[2], ((u32 *)data)[3], + ((u32 *)data)[4]); + + kt_ctrl->info.rw = NBL_ACC_MODE_WRITE; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_KT_ACC_CTRL, kt_ctrl->data, + NBL_FEM_KT_ACC_CTRL_TBL_WIDTH); + + times = 3; + do { + nbl_hw_rd_regs(hw_mgt, NBL_FEM_KT_ACC_ACK, kt_ack.data, + NBL_FEM_KT_ACC_ACK_TBL_WIDTH); + if (!kt_ack.info.done) { + times--; + usleep_range(100, 200); + } else { + break; + } + } while (times); + + if (!times) { + nbl_err(common, "Config kt flowtable failed"); + return -EIO; + } + + return 0; +} + +static int nbl_send_ht_data(struct nbl_hw_mgt *hw_mgt, + union nbl_fem_ht_acc_ctrl_u *ht_ctrl, u8 *data, + struct nbl_common_info *common) +{ + 
union nbl_fem_ht_acc_ack_u ht_ack = { .info = { 0 } }; + u32 times = 3; + + nbl_hw_wr_regs(hw_mgt, NBL_FEM_HT_ACC_DATA, data, + NBL_FEM_HT_ACC_DATA_TBL_WIDTH); + nbl_debug(common, "Set ht data = %x", *(u32 *)data); + + ht_ctrl->info.rw = NBL_ACC_MODE_WRITE; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_HT_ACC_CTRL, ht_ctrl->data, + NBL_FEM_HT_ACC_CTRL_TBL_WIDTH); + + times = 3; + do { + nbl_hw_rd_regs(hw_mgt, NBL_FEM_HT_ACC_ACK, ht_ack.data, + NBL_FEM_HT_ACC_ACK_TBL_WIDTH); + if (!ht_ack.info.done) { + times--; + usleep_range(100, 200); + } else { + break; + } + } while (times); + + if (!times) { + nbl_err(common, "Config ht flowtable failed"); + return -EIO; + } + + return 0; +} + +static void nbl_check_kt_data(struct nbl_hw_mgt *hw_mgt, + union nbl_fem_kt_acc_ctrl_u *kt_ctrl, + struct nbl_common_info *common) +{ + union nbl_fem_kt_acc_ack_u ack = { .info = { 0 } }; + u32 data[10] = { 0 }; + + kt_ctrl->info.rw = NBL_ACC_MODE_READ; + kt_ctrl->info.access_size = NBL_ACC_SIZE_320B; + + nbl_hw_wr_regs(hw_mgt, NBL_FEM_KT_ACC_CTRL, kt_ctrl->data, + NBL_FEM_KT_ACC_CTRL_TBL_WIDTH); + + nbl_hw_rd_regs(hw_mgt, NBL_FEM_KT_ACC_ACK, ack.data, + NBL_FEM_KT_ACC_ACK_TBL_WIDTH); + nbl_debug(common, "Check kt done:%u status:%u.", ack.info.done, + ack.info.status); + if (ack.info.done) { + nbl_hw_rd_regs(hw_mgt, NBL_FEM_KT_ACC_DATA, (u8 *)data, + NBL_KT_HW_L2_DW_LEN); + nbl_debug(common, + "Check kt data:0x%x-%x-%x-%x-%x-%x-%x-%x-%x-%x.", + data[9], data[8], data[7], data[6], data[5], data[4], + data[3], data[2], data[1], data[0]); + } +} + +static void nbl_check_ht_data(struct nbl_hw_mgt *hw_mgt, + union nbl_fem_ht_acc_ctrl_u *ht_ctrl, + struct nbl_common_info *common) +{ + union nbl_fem_ht_acc_ack_u ack = { .info = { 0 } }; + u32 data[4] = { 0 }; + + ht_ctrl->info.rw = NBL_ACC_MODE_READ; + ht_ctrl->info.access_size = NBL_ACC_SIZE_128B; + + nbl_hw_wr_regs(hw_mgt, NBL_FEM_HT_ACC_CTRL, ht_ctrl->data, + NBL_FEM_HT_ACC_CTRL_TBL_WIDTH); + + nbl_hw_rd_regs(hw_mgt, NBL_FEM_HT_ACC_ACK, ack.data, + 
NBL_FEM_HT_ACC_ACK_TBL_WIDTH); + nbl_debug(common, "Check ht done:%u status:%u.", ack.info.done, + ack.info.status); + if (ack.info.done) { + nbl_hw_rd_regs(hw_mgt, NBL_FEM_HT_ACC_DATA, (u8 *)data, + NBL_FEM_HT_ACC_DATA_TBL_WIDTH); + nbl_debug(common, "Check ht data:0x%x-%x-%x-%x.", data[0], + data[1], data[2], data[3]); + } +} + +static void nbl_hw_fem_set_bank(struct nbl_hw_mgt *hw_mgt) +{ + u32 bank_sel = 0; + + /* HT bank sel */ + bank_sel = HT_PORT0_BANK_SEL | HT_PORT1_BANK_SEL << NBL_8BIT | + HT_PORT2_BANK_SEL << NBL_16BIT; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_HT_BANK_SEL_BITMAP, (u8 *)&bank_sel, + sizeof(bank_sel)); + + /* KT bank sel */ + bank_sel = KT_PORT0_BANK_SEL | KT_PORT1_BANK_SEL << NBL_8BIT | + KT_PORT2_BANK_SEL << NBL_16BIT; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_KT_BANK_SEL_BITMAP, (u8 *)&bank_sel, + sizeof(bank_sel)); + + /* AT bank sel */ + bank_sel = AT_PORT0_BANK_SEL | AT_PORT1_BANK_SEL << NBL_16BIT; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_AT_BANK_SEL_BITMAP, (u8 *)&bank_sel, + sizeof(bank_sel)); + bank_sel = AT_PORT2_BANK_SEL; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_AT_BANK_SEL_BITMAP2, (u8 *)&bank_sel, + sizeof(bank_sel)); +} + +static void nbl_hw_fem_clear_tcam_ad(struct nbl_hw_mgt *hw_mgt) +{ + union fem_em_ad_table_u ad_table = { .info = { 0 } }; + union fem_em_tcam_table_u tcam_table; + int i, j; + + memset(&tcam_table, 0, sizeof(tcam_table)); + + for (i = 0; i < NBL_PT_LEN; i++) { + for (j = 0; j < NBL_TCAM_TABLE_LEN; j++) { + nbl_hw_wr_regs(hw_mgt, NBL_FEM_EM_TCAM_TABLE_REG(i, j), + tcam_table.hash_key, sizeof(tcam_table)); + nbl_hw_wr_regs(hw_mgt, NBL_FEM_EM_AD_TABLE_REG(i, j), + ad_table.hash_key, sizeof(ad_table)); + nbl_hw_rd32(hw_mgt, NBL_FEM_EM_TCAM_TABLE_REG(i, 1)); + } + } +} + +static int nbl_hw_set_ht(void *priv, u16 hash, u16 hash_other, u8 ht_table, + u8 bucket, u32 key_index, u8 valid) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union nbl_fem_ht_acc_data_u ht = { .info = { 0 } }; + union nbl_fem_ht_acc_ctrl_u ht_ctrl = { .info = 
{ 0 } }; + struct nbl_common_info *common; + + common = NBL_HW_MGT_TO_COMMON(hw_mgt); + + ht.info.vld = valid; + ht.info.hash = hash_other; + ht.info.kt_index = key_index; + + ht_ctrl.info.ht_id = ht_table == NBL_HT0 ? NBL_ACC_HT0 : NBL_ACC_HT1; + ht_ctrl.info.entry_id = hash; + ht_ctrl.info.bucket_id = bucket; + ht_ctrl.info.port = NBL_PT_PP0; + ht_ctrl.info.access_size = NBL_ACC_SIZE_32B; + ht_ctrl.info.start = 1; + + if (nbl_send_ht_data(hw_mgt, &ht_ctrl, ht.data, common)) + return -EIO; + + nbl_check_ht_data(hw_mgt, &ht_ctrl, common); + return 0; +} + +static int nbl_hw_set_kt(void *priv, u8 *key, u32 key_index, u8 key_type) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union nbl_fem_kt_acc_ctrl_u kt_ctrl = { .info = { 0 } }; + struct nbl_common_info *common; + + common = NBL_HW_MGT_TO_COMMON(hw_mgt); + + kt_ctrl.info.addr = key_index; + kt_ctrl.info.access_size = key_type == NBL_KT_HALF_MODE ? + NBL_ACC_SIZE_160B : + NBL_ACC_SIZE_320B; + kt_ctrl.info.start = 1; + + if (nbl_send_kt_data(hw_mgt, &kt_ctrl, key, common)) + return -EIO; + + nbl_check_kt_data(hw_mgt, &kt_ctrl, common); + return 0; +} + +static int nbl_hw_search_key(void *priv, u8 *key, u8 key_type) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_common_info *common; + union nbl_search_ctrl_u s_ctrl = { .info = { 0 } }; + union nbl_search_ack_u s_ack = { .info = { 0 } }; + u8 key_data[NBL_KT_BYTE_LEN] = { 0 }; + u8 search_key[NBL_FEM_SEARCH_KEY_LEN] = { 0 }; + u8 data[NBL_FEM_SEARCH_KEY_LEN] = { 0 }; + u8 times = 3; + + common = NBL_HW_MGT_TO_COMMON(hw_mgt); + + if (key_type == NBL_KT_HALF_MODE) + memcpy(key_data, key, NBL_KT_BYTE_HALF_LEN); + else + memcpy(key_data, key, NBL_KT_BYTE_LEN); + + key_data[0] &= KT_MASK_LEN32_ACTION_INFO; + key_data[1] &= KT_MASK_LEN12_ACTION_INFO; + if (key_type == NBL_KT_HALF_MODE) + memcpy(&search_key[20], key_data, NBL_KT_BYTE_HALF_LEN); + else + memcpy(search_key, key_data, NBL_KT_BYTE_LEN); + + nbl_debug(common, "Search 
key:0x%x-%x-%x-%x-%x-%x-%x-%x-%x-%x", + ((u32 *)search_key)[9], ((u32 *)search_key)[8], + ((u32 *)search_key)[7], ((u32 *)search_key)[6], + ((u32 *)search_key)[5], ((u32 *)search_key)[4], + ((u32 *)search_key)[3], ((u32 *)search_key)[2], + ((u32 *)search_key)[1], ((u32 *)search_key)[0]); + nbl_hw_wr_regs(hw_mgt, NBL_FEM_INSERT_SEARCH0_DATA, search_key, + NBL_FEM_SEARCH_KEY_LEN); + + s_ctrl.info.start = 1; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_INSERT_SEARCH0_CTRL, (u8 *)&s_ctrl, + NBL_SEARCH_CTRL_WIDTH); + + do { + nbl_hw_rd_regs(hw_mgt, NBL_FEM_INSERT_SEARCH0_ACK, s_ack.data, + NBL_SEARCH_ACK_WIDTH); + nbl_debug(common, "Search key ack:done:%u status:%u.", + s_ack.info.done, s_ack.info.status); + + if (!s_ack.info.done) { + times--; + usleep_range(100, 200); + } else { + nbl_hw_rd_regs(hw_mgt, NBL_FEM_INSERT_SEARCH0_DATA, + data, NBL_FEM_SEARCH_KEY_LEN); + nbl_debug(common, + "Search key data:0x%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x.", + ((u32 *)data)[10], ((u32 *)data)[9], + ((u32 *)data)[8], ((u32 *)data)[7], + ((u32 *)data)[6], ((u32 *)data)[5], + ((u32 *)data)[4], ((u32 *)data)[3], + ((u32 *)data)[2], ((u32 *)data)[1], + ((u32 *)data)[0]); + break; + } + } while (times); + + if (!times) { + nbl_err(common, "Search ht/kt failed."); + return -EAGAIN; + } + + return 0; +} + +static int nbl_hw_add_tcam(void *priv, u32 index, u8 *key, u32 *action, + u8 key_type, u8 pp_type) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union fem_em_tcam_table_u tcam_table; + union fem_em_tcam_table_u tcam_table_second; + union fem_em_ad_table_u ad_table; + + memset(&tcam_table, 0, sizeof(tcam_table)); + memset(&tcam_table_second, 0, sizeof(tcam_table_second)); + memset(&ad_table, 0, sizeof(ad_table)); + + memcpy(tcam_table.info.key, key, NBL_KT_BYTE_HALF_LEN); + tcam_table.info.key_vld = 1; + + if (key_type == NBL_KT_FULL_MODE) { + tcam_table.info.key_size = 1; + memcpy(tcam_table_second.info.key, &key[5], + NBL_KT_BYTE_HALF_LEN); + tcam_table_second.info.key_vld = 1; + 
tcam_table_second.info.key_size = 1; + + nbl_hw_wr_regs(hw_mgt, + NBL_FEM_EM_TCAM_TABLE_REG(pp_type, index + 1), + tcam_table_second.hash_key, + NBL_FLOW_TCAM_TOTAL_LEN); + } + nbl_hw_wr_regs(hw_mgt, NBL_FEM_EM_TCAM_TABLE_REG(pp_type, index), + tcam_table.hash_key, NBL_FLOW_TCAM_TOTAL_LEN); + + ad_table.info.action0 = action[0]; + ad_table.info.action1 = action[1]; + ad_table.info.action2 = action[2]; + ad_table.info.action3 = action[3]; + ad_table.info.action4 = action[4]; + ad_table.info.action5 = action[5]; + ad_table.info.action6 = action[6]; + ad_table.info.action7 = action[7]; + ad_table.info.action8 = action[8]; + ad_table.info.action9 = action[9]; + ad_table.info.action10 = action[10]; + ad_table.info.action11 = action[11]; + ad_table.info.action12 = action[12]; + ad_table.info.action13 = action[13]; + ad_table.info.action14 = action[14]; + ad_table.info.action15 = action[15]; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_EM_AD_TABLE_REG(pp_type, index), + ad_table.hash_key, NBL_FLOW_AD_TOTAL_LEN); + + return 0; +} + +static void nbl_hw_del_tcam(void *priv, u32 index, u8 key_type, u8 pp_type) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union fem_em_tcam_table_u tcam_table; + union fem_em_tcam_table_u tcam_table_second; + union fem_em_ad_table_u ad_table; + + memset(&tcam_table, 0, sizeof(tcam_table)); + memset(&tcam_table_second, 0, sizeof(tcam_table_second)); + memset(&ad_table, 0, sizeof(ad_table)); + if (key_type == NBL_KT_FULL_MODE) + nbl_hw_wr_regs(hw_mgt, + NBL_FEM_EM_TCAM_TABLE_REG(pp_type, index + 1), + tcam_table_second.hash_key, + NBL_FLOW_TCAM_TOTAL_LEN); + nbl_hw_wr_regs(hw_mgt, NBL_FEM_EM_TCAM_TABLE_REG(pp_type, index), + tcam_table.hash_key, NBL_FLOW_TCAM_TOTAL_LEN); + + nbl_hw_wr_regs(hw_mgt, NBL_FEM_EM_AD_TABLE_REG(pp_type, index), + ad_table.hash_key, NBL_FLOW_AD_TOTAL_LEN); +} + +static int nbl_hw_add_mcc(void *priv, u16 mcc_id, u16 prev_mcc_id, + u16 next_mcc_id, u16 action) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; 
+ struct nbl_mcc_tbl node = { 0 }; + + node.vld = 1; + if (next_mcc_id == NBL_MCC_ID_INVALID) { + node.next_pntr = 0; + node.tail = 1; + } else { + node.next_pntr = next_mcc_id; + node.tail = 0; + } + + node.stateid_filter = 1; + node.flowid_filter = 1; + node.dport_act = action; + + nbl_hw_wr_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(mcc_id), (u8 *)&node, + sizeof(node)); + if (prev_mcc_id != NBL_MCC_ID_INVALID) { + nbl_hw_rd_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(prev_mcc_id), + (u8 *)&node, sizeof(node)); + node.next_pntr = mcc_id; + node.tail = 0; + nbl_hw_wr_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(prev_mcc_id), + (u8 *)&node, sizeof(node)); + } + + return 0; +} + +static void nbl_hw_del_mcc(void *priv, u16 mcc_id, u16 prev_mcc_id, + u16 next_mcc_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_mcc_tbl node = { 0 }; + + if (prev_mcc_id != NBL_MCC_ID_INVALID) { + nbl_hw_rd_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(prev_mcc_id), + (u8 *)&node, sizeof(node)); + + if (next_mcc_id != NBL_MCC_ID_INVALID) { + node.next_pntr = next_mcc_id; + } else { + node.next_pntr = 0; + node.tail = 1; + } + + nbl_hw_wr_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(prev_mcc_id), + (u8 *)&node, sizeof(node)); + } + + memset(&node, 0, sizeof(node)); + nbl_hw_wr_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(mcc_id), (u8 *)&node, + sizeof(node)); +} + +static void nbl_hw_update_mcc_next_node(void *priv, u16 mcc_id, u16 next_mcc_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_mcc_tbl node = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(mcc_id), (u8 *)&node, + sizeof(node)); + if (next_mcc_id != NBL_MCC_ID_INVALID) { + node.next_pntr = next_mcc_id; + node.tail = 0; + } else { + node.next_pntr = 0; + node.tail = 1; + } + + nbl_hw_wr_regs(hw_mgt, NBL_MCC_LEAF_NODE_TABLE(mcc_id), (u8 *)&node, + sizeof(node)); +} + +static int nbl_hw_init_fem(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + union nbl_fem_ht_size_table_u ht_size = { 
.info = { 0 } }; + u32 fem_start = NBL_FEM_INIT_START_KERN; + int ret = 0; + + nbl_hw_wr_regs(hw_mgt, NBL_FEM_INIT_START, (u8 *)&fem_start, + sizeof(fem_start)); + + nbl_hw_fem_set_bank(hw_mgt); + + ht_size.info.pp0_size = HT_PORT0_BTM; + ht_size.info.pp1_size = HT_PORT1_BTM; + ht_size.info.pp2_size = HT_PORT2_BTM; + nbl_hw_wr_regs(hw_mgt, NBL_FEM_HT_SIZE_REG, ht_size.data, + NBL_FEM_HT_SIZE_TBL_WIDTH); + + nbl_hw_fem_clear_tcam_ad(hw_mgt); + + return ret; +} + static void nbl_configure_dped_checksum(struct nbl_hw_mgt *hw_mgt) { union dped_l4_ck_cmd_40_u l4_ck_cmd_40; @@ -2007,6 +2468,20 @@ static void nbl_hw_set_coalesce(void *priv, u16 interrupt_id, u16 pnum, (u8 *)&msix_info, sizeof(msix_info)); } +static int nbl_hw_set_vsi_mtu(void *priv, u16 vsi_id, u16 mtu_sel) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_ipro_dn_src_port_tbl dpsport = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TABLE(vsi_id), + (u8 *)&dpsport, sizeof(struct nbl_ipro_dn_src_port_tbl)); + dpsport.mtu_sel = mtu_sel; + nbl_hw_wr_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TABLE(vsi_id), + (u8 *)&dpsport, sizeof(struct nbl_ipro_dn_src_port_tbl)); + + return 0; +} + static void nbl_hw_config_adminq_rxq(void *priv, dma_addr_t dma_addr, int size_bwid) { @@ -2172,6 +2647,36 @@ static void nbl_hw_set_fw_pong(void *priv, u32 pong) sizeof(pong)); } +static int nbl_hw_set_mtu(void *priv, u16 mtu_index, u16 mtu) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct nbl_ipro_mtu_sel ipro_mtu_sel = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_IPRO_MTU_SEL_REG(mtu_index / 2), + (u8 *)&ipro_mtu_sel, sizeof(ipro_mtu_sel)); + + if (mtu_index % 2 == 0) + ipro_mtu_sel.mtu_0 = mtu; + else + ipro_mtu_sel.mtu_1 = mtu; + + nbl_hw_wr_regs(hw_mgt, NBL_IPRO_MTU_SEL_REG(mtu_index / 2), + (u8 *)&ipro_mtu_sel, sizeof(ipro_mtu_sel)); + + return 0; +} + +static u16 nbl_hw_get_mtu_index(void *priv, u16 vsi_id) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + struct 
nbl_ipro_dn_src_port_tbl ipro_dn_src_port_tbl = { 0 }; + + nbl_hw_rd_regs(hw_mgt, NBL_IPRO_DN_SRC_PORT_TBL_REG(vsi_id), + (u8 *)&ipro_dn_src_port_tbl, + sizeof(ipro_dn_src_port_tbl)); + return ipro_dn_src_port_tbl.mtu_sel; +} + static int nbl_hw_process_abnormal_queue(struct nbl_hw_mgt *hw_mgt, u16 queue_id, int type, struct nbl_abnormal_details *detail) @@ -2431,9 +2936,23 @@ static struct nbl_hw_ops hw_ops = { .save_uvn_ctx = nbl_hw_save_uvn_ctx, .setup_queue_switch = nbl_hw_setup_queue_switch, .init_pfc = nbl_hw_init_pfc, + .set_vsi_mtu = nbl_hw_set_vsi_mtu, + .set_mtu = nbl_hw_set_mtu, + .get_mtu_index = nbl_hw_get_mtu_index, + .configure_msix_map = nbl_hw_configure_msix_map, .configure_msix_info = nbl_hw_configure_msix_info, .set_coalesce = nbl_hw_set_coalesce, + + .set_ht = nbl_hw_set_ht, + .set_kt = nbl_hw_set_kt, + .search_key = nbl_hw_search_key, + .add_tcam = nbl_hw_add_tcam, + .del_tcam = nbl_hw_del_tcam, + .add_mcc = nbl_hw_add_mcc, + .del_mcc = nbl_hw_del_mcc, + .update_mcc_next_node = nbl_hw_update_mcc_next_node, + .init_fem = nbl_hw_init_fem, .update_mailbox_queue_tail_ptr = nbl_hw_update_mailbox_queue_tail_ptr, .config_mailbox_rxq = nbl_hw_config_mailbox_rxq, .config_mailbox_txq = nbl_hw_config_mailbox_txq, diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c index 161ba88a61c0..010a4c1363ed 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c @@ -550,9 +550,14 @@ static int nbl_res_setup_ops(struct device *dev, return -ENOMEM; if (!is_ops_inited) { + ret = nbl_flow_setup_ops_leonis(&res_ops); + if (ret) + goto setup_fail; + ret = nbl_queue_setup_ops_leonis(&res_ops); if (ret) goto setup_fail; + ret = nbl_intr_setup_ops(&res_ops); if (ret) goto setup_fail; @@ -884,6 +889,7 @@ static void 
nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis) nbl_intr_mgt_stop(res_mgt); nbl_adminq_mgt_stop(res_mgt); nbl_vsi_mgt_stop(res_mgt); + nbl_flow_mgt_stop_leonis(res_mgt); nbl_res_ctrl_dev_ustore_stats_remove(res_mgt); nbl_res_ctrl_dev_remove_vsi_info(res_mgt); nbl_res_ctrl_dev_remove_eth_info(res_mgt); @@ -936,6 +942,10 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis, if (ret) goto start_fail; + ret = nbl_flow_mgt_start_leonis(res_mgt); + if (ret) + goto start_fail; + ret = nbl_queue_mgt_start(res_mgt); if (ret) goto start_fail; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h index 3763c33db00f..a486d2e64626 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.h @@ -11,6 +11,9 @@ #define NBL_MAX_PF_LEONIS 8 +int nbl_flow_mgt_start_leonis(struct nbl_resource_mgt *res_mgt); +void nbl_flow_mgt_stop_leonis(struct nbl_resource_mgt *res_mgt); +int nbl_flow_setup_ops_leonis(struct nbl_resource_ops *resource_ops); int nbl_queue_setup_ops_leonis(struct nbl_resource_ops *resource_ops); void nbl_queue_remove_ops_leonis(struct nbl_resource_ops *resource_ops); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h index 853bb3022e51..c52a17acc4f3 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h @@ -10,6 +10,93 @@ #include <linux/netdev_features.h> #include "nbl_include.h" +#define NBL_HASH_CFT_MAX 4 +#define NBL_HASH_CFT_AVL 2 + +#define NBL_CRC16_CCITT(data, size) \ + nbl_calc_crc16(data, size, 0x1021, 0x0000, 1, 0x0000) +#define NBL_CRC16_CCITT_FALSE(data, size) \ + 
nbl_calc_crc16(data, size, 0x1021, 0xFFFF, 0, 0x0000) +#define NBL_CRC16_XMODEM(data, size) \ + nbl_calc_crc16(data, size, 0x1021, 0x0000, 0, 0x0000) +#define NBL_CRC16_IBM(data, size) \ + nbl_calc_crc16(data, size, 0x8005, 0x0000, 1, 0x0000) + +static inline u8 nbl_invert_uint8(const u8 data) +{ + u8 i, result = 0; + + for (i = 0; i < 8; i++) { + if (data & (1 << i)) + result |= 1 << (7 - i); + } + + return result; +} + +static inline u16 nbl_invert_uint16(const u16 data) +{ + u16 i, result = 0; + + for (i = 0; i < 16; i++) { + if (data & (1 << i)) + result |= 1 << (15 - i); + } + + return result; +} + +static inline u16 nbl_calc_crc16(const u8 *data, u32 size, u16 crc_poly, + u16 init_value, u8 ref_flag, u16 xorout) +{ + u16 crc_reg = init_value, tmp = 0; + u8 j, byte = 0; + + while (size--) { + byte = *(data++); + if (ref_flag) + byte = nbl_invert_uint8(byte); + crc_reg ^= byte << 8; + for (j = 0; j < 8; j++) { + tmp = crc_reg & 0x8000; + crc_reg <<= 1; + if (tmp) + crc_reg ^= crc_poly; + } + } + + if (ref_flag) + crc_reg = nbl_invert_uint16(crc_reg); + + crc_reg = crc_reg ^ xorout; + return crc_reg; +} + +static inline u16 nbl_hash_transfer(u16 hash, u16 power, u16 depth) +{ + u16 temp = 0; + u16 val = 0; + u32 val2 = 0; + u16 off = 16 - power; + + temp = (hash >> power); + val = hash << off; + val = val >> off; + + if (depth == 0) { + val = temp + val; + val = val << off; + val = val >> off; + } else { + val2 = val; + val2 *= depth; + val2 = val2 >> power; + val = (u16)val2; + } + + return val; +} + #define nbl_err(common, fmt, ...) 
\ do { \ typeof(common) _common = (common); \ diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h index b8f49cc75bc8..e2c5a865892f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h @@ -105,10 +105,28 @@ struct nbl_hw_ops { void (*update_adminq_queue_tail_ptr)(void *priv, u16 tail_ptr, u8 txrx); bool (*check_adminq_dma_err)(void *priv, bool tx); + int (*set_vsi_mtu)(void *priv, u16 vsi_id, u16 mtu_sel); + u8 __iomem *(*get_hw_addr)(void *priv, size_t *size); int (*set_sfp_state)(void *priv, u8 eth_id, u8 state); void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); enum nbl_hw_status (*get_hw_status)(void *priv); + int (*set_mtu)(void *priv, u16 mtu_index, u16 mtu); + u16 (*get_mtu_index)(void *priv, u16 vsi_id); + + int (*set_ht)(void *priv, u16 hash, u16 hash_other, u8 ht_table, + u8 bucket, u32 key_index, u8 valid); + int (*set_kt)(void *priv, u8 *key, u32 key_index, u8 key_type); + int (*search_key)(void *priv, u8 *key, u8 key_type); + int (*add_tcam)(void *priv, u32 index, u8 *key, u32 *action, + u8 key_type, u8 pp_type); + void (*del_tcam)(void *priv, u32 index, u8 key_type, u8 pp_type); + int (*add_mcc)(void *priv, u16 mcc_id, u16 prev_mcc_id, u16 next_mcc_id, + u16 action); + void (*del_mcc)(void *priv, u16 mcc_id, u16 prev_mcc_id, + u16 next_mcc_id); + void (*update_mcc_next_node)(void *priv, u16 mcc_id, u16 next_mcc_id); + int (*init_fem)(void *priv); void (*set_fw_ping)(void *priv, u32 ping); u32 (*get_fw_pong)(void *priv); void (*set_fw_pong)(void *priv, u32 pong); -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 10/15] net/nebula-matrix: add txrx resource definitions and implementation
  2026-01-09 10:01 ` illusion.wang
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
	vadim.fedorenko, lukas.bulwahn, edumazet, open list

The txrx resource management functions cover:

TX/RX ring management: allocate and release DMA memory and descriptors.
Data transmission: support TSO (TCP Segmentation Offload), checksum
offload, and VLAN tag insertion.
Data reception: support NAPI polling, checksum offload, and VLAN tag
stripping.
Resource management: cache receive buffers and pre-allocate resources.
Statistics and debugging: collect transmit/receive statistics.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 1 + .../net/ethernet/nebula-matrix/nbl/nbl_core.h | 27 + .../nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c | 19 + .../nbl_hw_leonis/nbl_resource_leonis.c | 11 + .../nebula-matrix/nbl/nbl_hw/nbl_resource.h | 4 + .../nebula-matrix/nbl/nbl_hw/nbl_txrx.c | 2150 +++++++++++++++++ .../nebula-matrix/nbl/nbl_hw/nbl_txrx.h | 184 ++ .../nbl/nbl_include/nbl_def_hw.h | 4 + .../nbl/nbl_include/nbl_include.h | 5 + 9 files changed, 2405 insertions(+) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index 16d751e01b8e..7e2aebdad098 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -13,6 +13,7 @@ nbl_core-objs += nbl_common/nbl_common.o \ nbl_hw/nbl_hw_leonis/nbl_hw_leonis_regs.o \ nbl_hw/nbl_resource.o \ nbl_hw/nbl_interrupt.o \ + nbl_hw/nbl_txrx.o \ nbl_hw/nbl_queue.o \ nbl_hw/nbl_vsi.o \ nbl_hw/nbl_adminq.o \ diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h index 6c7e2549ff8b..eef0e76fb9db 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h @@ -29,6 +29,23 @@ #define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \ (&(NBL_ADAP_TO_SERV_OPS_TBL(adapter)->pt_ops)) +#define NBL_NETDEV_PRIV_TO_ADAPTER(priv) ((priv)->adapter) + +#define NBL_NETDEV_TO_ADAPTER(netdev) \ + (NBL_NETDEV_PRIV_TO_ADAPTER( \ + (struct nbl_netdev_priv *)netdev_priv(netdev))) + +#define NBL_NETDEV_TO_SERV_MGT(netdev) \ + (NBL_ADAP_TO_SERV_MGT(NBL_NETDEV_PRIV_TO_ADAPTER(\ + (struct nbl_netdev_priv *)netdev_priv(netdev)))) + +#define NBL_NETDEV_TO_DEV_MGT(netdev) \ + 
(NBL_ADAP_TO_DEV_MGT(NBL_NETDEV_TO_ADAPTER(netdev))) + +#define NBL_NETDEV_TO_COMMON(netdev) \ + (NBL_ADAP_TO_COMMON(NBL_NETDEV_PRIV_TO_ADAPTER(\ + (struct nbl_netdev_priv *)netdev_priv(netdev)))) + #define NBL_CAP_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1) #define NBL_CAP_IS_CTRL(val) NBL_CAP_TEST_BIT(val, NBL_CAP_HAS_CTRL_BIT) @@ -71,6 +88,16 @@ struct nbl_adapter { struct nbl_init_param init_param; }; +struct nbl_netdev_priv { + struct nbl_adapter *adapter; + struct net_device *netdev; + u16 tx_queue_num; + u16 rx_queue_num; + u16 queue_size; + u16 data_vsi; + s64 last_st_time; +}; + struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, struct nbl_init_param *param); void nbl_core_remove(struct nbl_adapter *adapter); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c index 0b15d6365513..78c276acf72f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_hw_leonis.c @@ -2435,6 +2435,23 @@ static void nbl_hw_cfg_mailbox_qinfo(void *priv, u16 func_id, u16 bus, (u8 *)&mb_qinfo_map, sizeof(mb_qinfo_map)); } +static void nbl_hw_update_tail_ptr(void *priv, struct nbl_notify_param *param) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + u8 __iomem *notify_addr = hw_mgt->hw_addr; + u32 local_qid = param->notify_qid; + u32 tail_ptr = param->tail_ptr; + + writel((((u32)tail_ptr << 16) | (u32)local_qid), notify_addr); +} + +static u8 __iomem *nbl_hw_get_tail_ptr(void *priv) +{ + struct nbl_hw_mgt *hw_mgt = (struct nbl_hw_mgt *)priv; + + return hw_mgt->hw_addr; +} + static void nbl_hw_set_promisc_mode(void *priv, u16 vsi_id, u16 eth_id, u16 mode) { @@ -2980,6 +2997,8 @@ static struct nbl_hw_ops hw_ops = { .update_adminq_queue_tail_ptr = nbl_hw_update_adminq_queue_tail_ptr, .check_adminq_dma_err = nbl_hw_check_adminq_dma_err, + .update_tail_ptr 
= nbl_hw_update_tail_ptr, + .get_tail_ptr = nbl_hw_get_tail_ptr, .get_hw_addr = nbl_hw_get_hw_addr, .set_fw_ping = nbl_hw_set_fw_ping, .get_fw_pong = nbl_hw_get_fw_pong, diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c index 010a4c1363ed..8042172ce11f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_hw_leonis/nbl_resource_leonis.c @@ -558,6 +558,10 @@ static int nbl_res_setup_ops(struct device *dev, if (ret) goto setup_fail; + ret = nbl_txrx_setup_ops(&res_ops); + if (ret) + goto setup_fail; + ret = nbl_intr_setup_ops(&res_ops); if (ret) goto setup_fail; @@ -886,6 +890,7 @@ static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis) struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt; nbl_queue_mgt_stop(res_mgt); + nbl_txrx_mgt_stop(res_mgt); nbl_intr_mgt_stop(res_mgt); nbl_adminq_mgt_stop(res_mgt); nbl_vsi_mgt_stop(res_mgt); @@ -971,6 +976,12 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis, nbl_res_set_fix_capability(res_mgt, NBL_NEED_DESTROY_CHIP); } + if (caps.has_net) { + ret = nbl_txrx_mgt_start(res_mgt); + if (ret) + goto start_fail; + } + nbl_res_set_fix_capability(res_mgt, NBL_TASK_CLEAN_MAILBOX_CAP); nbl_res_set_fix_capability(res_mgt, NBL_TASK_RESET_CAP); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h index de6307d13480..3460a424f21e 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_resource.h @@ -855,6 +855,10 @@ int nbl_intr_setup_ops(struct nbl_resource_ops *resource_ops); int nbl_queue_mgt_start(struct nbl_resource_mgt *res_mgt); void nbl_queue_mgt_stop(struct nbl_resource_mgt *res_mgt); +int 
nbl_txrx_mgt_start(struct nbl_resource_mgt *res_mgt); +void nbl_txrx_mgt_stop(struct nbl_resource_mgt *res_mgt); +int nbl_txrx_setup_ops(struct nbl_resource_ops *resource_ops); + int nbl_vsi_mgt_start(struct nbl_resource_mgt *res_mgt); void nbl_vsi_mgt_stop(struct nbl_resource_mgt *res_mgt); int nbl_vsi_setup_ops(struct nbl_resource_ops *resource_ops); diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c new file mode 100644 index 000000000000..11999906c102 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.c @@ -0,0 +1,2150 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ +#include <linux/etherdevice.h> +#include <linux/ip.h> +#include <linux/ipv6.h> +#include <net/ipv6.h> +#include <linux/sctp.h> +#include <linux/if_vlan.h> +#include <net/page_pool/helpers.h> + +#include "nbl_txrx.h" + +static bool nbl_txrx_within_vsi(struct nbl_txrx_vsi_info *vsi_info, + u16 ring_index) +{ + return ring_index >= vsi_info->ring_offset && + ring_index < vsi_info->ring_offset + vsi_info->ring_num; +} + +static struct netdev_queue *txring_txq(const struct nbl_res_tx_ring *ring) +{ + return netdev_get_tx_queue(ring->netdev, ring->queue_index); +} + +static struct nbl_res_tx_ring * +nbl_alloc_tx_ring(struct nbl_resource_mgt *res_mgt, struct net_device *netdev, + u16 ring_index, u16 desc_num) +{ + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct nbl_res_tx_ring *ring; + + ring = devm_kzalloc(dev, sizeof(struct nbl_res_tx_ring), GFP_KERNEL); + if (!ring) + return NULL; + + ring->vsi_info = txrx_mgt->vsi_info; + ring->dma_dev = common->dma_dev; + ring->product_type = common->product_type; + ring->eth_id = common->eth_id; + ring->queue_index = 
ring_index; + ring->notify_addr = (u8 __iomem *) + hw_ops->get_tail_ptr(NBL_RES_MGT_TO_HW_PRIV(res_mgt)); + ring->notify_qid = NBL_RES_NOFITY_QID(res_mgt, ring_index * 2 + 1); + ring->netdev = netdev; + ring->desc_num = desc_num; + ring->used_wrap_counter = 1; + ring->avail_used_flags |= BIT(NBL_PACKED_DESC_F_AVAIL); + + return ring; +} + +static int nbl_alloc_tx_rings(struct nbl_resource_mgt *res_mgt, + struct net_device *netdev, u16 tx_num, + u16 desc_num) +{ + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct nbl_res_tx_ring *ring; + u32 ring_index; + + if (txrx_mgt->tx_rings) { + netif_err(common, drv, netdev, + "Try to allocate tx_rings which already exists\n"); + return -EINVAL; + } + + txrx_mgt->tx_ring_num = tx_num; + + txrx_mgt->tx_rings = devm_kcalloc(dev, tx_num, + sizeof(struct nbl_res_tx_ring *), + GFP_KERNEL); + if (!txrx_mgt->tx_rings) + return -ENOMEM; + + for (ring_index = 0; ring_index < tx_num; ring_index++) { + ring = txrx_mgt->tx_rings[ring_index]; + WARN_ON(ring); + ring = nbl_alloc_tx_ring(res_mgt, netdev, ring_index, desc_num); + if (!ring) + goto alloc_tx_ring_failed; + + WRITE_ONCE(txrx_mgt->tx_rings[ring_index], ring); + } + + return 0; + +alloc_tx_ring_failed: + while (ring_index--) + devm_kfree(dev, txrx_mgt->tx_rings[ring_index]); + devm_kfree(dev, txrx_mgt->tx_rings); + txrx_mgt->tx_rings = NULL; + return -ENOMEM; +} + +static void nbl_free_tx_rings(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct nbl_res_tx_ring *ring; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + u16 ring_count; + u16 ring_index; + + ring_count = txrx_mgt->tx_ring_num; + for (ring_index = 0; ring_index < ring_count; ring_index++) { + ring = txrx_mgt->tx_rings[ring_index]; + devm_kfree(dev, ring); + } + devm_kfree(dev, txrx_mgt->tx_rings); + txrx_mgt->tx_rings = NULL; +} + +static 
int nbl_alloc_rx_rings(struct nbl_resource_mgt *res_mgt, + struct net_device *netdev, u16 rx_num, + u16 desc_num) +{ + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct nbl_res_rx_ring *ring; + u32 ring_index; + + if (txrx_mgt->rx_rings) { + netif_err(common, drv, netdev, + "Try to allocate rx_rings which already exists\n"); + return -EINVAL; + } + + txrx_mgt->rx_ring_num = rx_num; + + txrx_mgt->rx_rings = devm_kcalloc(dev, rx_num, + sizeof(struct nbl_res_rx_ring *), + GFP_KERNEL); + if (!txrx_mgt->rx_rings) + return -ENOMEM; + + for (ring_index = 0; ring_index < rx_num; ring_index++) { + ring = txrx_mgt->rx_rings[ring_index]; + WARN_ON(ring); + ring = devm_kzalloc(dev, sizeof(struct nbl_res_rx_ring), + GFP_KERNEL); + if (!ring) + goto alloc_rx_ring_failed; + + ring->common = common; + ring->txrx_mgt = txrx_mgt; + ring->dma_dev = common->dma_dev; + ring->queue_index = ring_index; + ring->notify_qid = NBL_RES_NOFITY_QID(res_mgt, ring_index * 2); + ring->netdev = netdev; + ring->desc_num = desc_num; + /* RX buffer length is determined by mtu, + * when netdev up we will set buf_len according to its mtu + */ + ring->buf_len = PAGE_SIZE / 2 - NBL_RX_PAD; + + ring->used_wrap_counter = 1; + ring->avail_used_flags |= BIT(NBL_PACKED_DESC_F_AVAIL); + WRITE_ONCE(txrx_mgt->rx_rings[ring_index], ring); + } + + return 0; + +alloc_rx_ring_failed: + while (ring_index--) + devm_kfree(dev, txrx_mgt->rx_rings[ring_index]); + devm_kfree(dev, txrx_mgt->rx_rings); + txrx_mgt->rx_rings = NULL; + return -ENOMEM; +} + +static void nbl_free_rx_rings(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct nbl_res_rx_ring *ring; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + u16 ring_count; + u16 ring_index; + + ring_count = txrx_mgt->rx_ring_num; + for (ring_index = 0; ring_index < ring_count; ring_index++) { + ring 
= txrx_mgt->rx_rings[ring_index]; + devm_kfree(dev, ring); + } + devm_kfree(dev, txrx_mgt->rx_rings); + txrx_mgt->rx_rings = NULL; +} + +static int nbl_alloc_vectors(struct nbl_resource_mgt *res_mgt, u16 num) +{ + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct nbl_res_vector *vector; + u32 index; + + if (txrx_mgt->vectors) { + nbl_err(common, + "Try to allocate vectors which already exists\n"); + return -EINVAL; + } + + txrx_mgt->vectors = devm_kcalloc(dev, num, + sizeof(struct nbl_res_vector *), + GFP_KERNEL); + if (!txrx_mgt->vectors) + return -ENOMEM; + + for (index = 0; index < num; index++) { + vector = txrx_mgt->vectors[index]; + WARN_ON(vector); + vector = devm_kzalloc(dev, sizeof(struct nbl_res_vector), + GFP_KERNEL); + if (!vector) + goto alloc_vector_failed; + + vector->rx_ring = txrx_mgt->rx_rings[index]; + vector->tx_ring = txrx_mgt->tx_rings[index]; + WRITE_ONCE(txrx_mgt->vectors[index], vector); + } + return 0; + +alloc_vector_failed: + while (index--) + devm_kfree(dev, txrx_mgt->vectors[index]); + devm_kfree(dev, txrx_mgt->vectors); + txrx_mgt->vectors = NULL; + return -ENOMEM; +} + +static void nbl_free_vectors(struct nbl_resource_mgt *res_mgt) +{ + struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt; + struct nbl_res_vector *vector; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + u16 count, index; + + count = txrx_mgt->rx_ring_num; + for (index = 0; index < count; index++) { + vector = txrx_mgt->vectors[index]; + devm_kfree(dev, vector); + } + devm_kfree(dev, txrx_mgt->vectors); + txrx_mgt->vectors = NULL; +} + +static int nbl_res_txrx_alloc_rings(void *priv, struct net_device *netdev, + struct nbl_ring_param *param) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + int err = 0; + + err = nbl_alloc_tx_rings(res_mgt, netdev, param->tx_ring_num, + param->queue_size); + if (err) + return err; 
+
+	err = nbl_alloc_rx_rings(res_mgt, netdev, param->rx_ring_num,
+				 param->queue_size);
+	if (err)
+		goto alloc_rx_rings_err;
+
+	err = nbl_alloc_vectors(res_mgt, param->rx_ring_num);
+	if (err)
+		goto alloc_vectors_err;
+
+	nbl_info(res_mgt->common, "Alloc rings for %d tx, %d rx, %d desc\n",
+		 param->tx_ring_num, param->rx_ring_num, param->queue_size);
+	return 0;
+
+alloc_vectors_err:
+	nbl_free_rx_rings(res_mgt);
+alloc_rx_rings_err:
+	nbl_free_tx_rings(res_mgt);
+	return err;
+}
+
+static void nbl_res_txrx_remove_rings(void *priv)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+
+	nbl_free_vectors(res_mgt);
+	nbl_free_tx_rings(res_mgt);
+	nbl_free_rx_rings(res_mgt);
+	nbl_debug(res_mgt->common, "Remove rings");
+}
+
+static dma_addr_t nbl_res_txrx_start_tx_ring(void *priv, u8 ring_index)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+	struct nbl_res_tx_ring *tx_ring =
+		NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index);
+
+	if (tx_ring->tx_bufs) {
+		nbl_err(res_mgt->common,
+			"Try to setup a TX ring with buffer management array already allocated\n");
+		return (dma_addr_t)NULL;
+	}
+
+	tx_ring->tx_bufs = devm_kcalloc(dev, tx_ring->desc_num,
+					sizeof(*tx_ring->tx_bufs), GFP_KERNEL);
+	if (!tx_ring->tx_bufs)
+		return (dma_addr_t)NULL;
+
+	/* Allocate twice the memory; the second half is used to back up the
+	 * descriptors for desc checking.
+	 */
+	tx_ring->size = ALIGN(tx_ring->desc_num * sizeof(struct nbl_ring_desc),
+			      PAGE_SIZE);
+	tx_ring->desc = dmam_alloc_coherent(dma_dev, tx_ring->size,
+					    &tx_ring->dma,
+					    GFP_KERNEL | __GFP_ZERO);
+	if (!tx_ring->desc)
+		goto alloc_dma_err;
+
+	tx_ring->next_to_use = 0;
+	tx_ring->next_to_clean = 0;
+	tx_ring->tail_ptr = 0;
+
+	tx_ring->valid = true;
+	nbl_debug(res_mgt->common, "Start tx ring %d", ring_index);
+	return tx_ring->dma;
+
+alloc_dma_err:
+	devm_kfree(dev,
tx_ring->tx_bufs); + tx_ring->tx_bufs = NULL; + tx_ring->size = 0; + return (dma_addr_t)NULL; +} + +static __always_inline bool nbl_rx_cache_get(struct nbl_res_rx_ring *rx_ring, + struct nbl_dma_info *dma_info) +{ + struct nbl_page_cache *cache = &rx_ring->page_cache; + struct nbl_rx_queue_stats *stats = &rx_ring->rx_stats; + + if (unlikely(cache->head == cache->tail)) { + stats->rx_cache_empty++; + return false; + } + + if (page_ref_count(cache->page_cache[cache->head].page) != 1) { + stats->rx_cache_busy++; + return false; + } + + *dma_info = cache->page_cache[cache->head]; + cache->head = (cache->head + 1) & (NBL_MAX_CACHE_SIZE - 1); + stats->rx_cache_reuse++; + + dma_sync_single_for_device(rx_ring->dma_dev, dma_info->addr, + dma_info->size, DMA_FROM_DEVICE); + return true; +} + +static __always_inline int nbl_page_alloc_pool(struct nbl_res_rx_ring *rx_ring, + struct nbl_dma_info *dma_info) +{ + if (nbl_rx_cache_get(rx_ring, dma_info)) + return 0; + + dma_info->page = page_pool_dev_alloc_pages(rx_ring->page_pool); + if (unlikely(!dma_info->page)) + return -ENOMEM; + + dma_info->addr = dma_map_page_attrs(rx_ring->dma_dev, dma_info->page, 0, + dma_info->size, DMA_FROM_DEVICE, + NBL_RX_DMA_ATTR); + + if (unlikely(dma_mapping_error(rx_ring->dma_dev, dma_info->addr))) { + page_pool_recycle_direct(rx_ring->page_pool, dma_info->page); + dma_info->page = NULL; + return -ENOMEM; + } + + return 0; +} + +static __always_inline int nbl_get_rx_frag(struct nbl_res_rx_ring *rx_ring, + struct nbl_rx_buffer *buffer) +{ + int err = 0; + + /* first buffer alloc page */ + if (buffer->first_in_page) + err = nbl_page_alloc_pool(rx_ring, buffer->di); + + return err; +} + +static __always_inline bool nbl_alloc_rx_bufs(struct nbl_res_rx_ring *rx_ring, + u16 count) +{ + u32 buf_len; + u16 next_to_use, head; + __le16 head_flags = 0; + struct nbl_ring_desc *rx_desc, *head_desc; + struct nbl_rx_buffer *rx_buf; + int i; + + if (unlikely(!rx_ring || !count)) { + 
nbl_warn(NBL_RING_TO_COMMON(rx_ring), + "invalid input parameters, rx_ring is %p, count is %d.\n", + rx_ring, count); + return -EINVAL; + } + + buf_len = rx_ring->buf_len; + next_to_use = rx_ring->next_to_use; + + head = next_to_use; + head_desc = NBL_RX_DESC(rx_ring, next_to_use); + rx_desc = NBL_RX_DESC(rx_ring, next_to_use); + rx_buf = NBL_RX_BUF(rx_ring, next_to_use); + + if (unlikely(!rx_desc || !rx_buf)) { + nbl_warn(NBL_RING_TO_COMMON(rx_ring), + "invalid input parameters, next_to_use:%d, rx_desc is %p, rx_buf is %p.\n", + next_to_use, rx_desc, rx_buf); + return -EINVAL; + } + + do { + if (nbl_get_rx_frag(rx_ring, rx_buf)) + break; + + for (i = 0; i < rx_ring->frags_num_per_page; + i++, rx_desc++, rx_buf++) { + rx_desc->addr = + cpu_to_le64(rx_buf->di->addr + rx_buf->offset); + rx_desc->len = cpu_to_le32(buf_len); + rx_desc->id = cpu_to_le16(next_to_use); + + if (likely(head != next_to_use || i)) + rx_desc->flags = + cpu_to_le16(rx_ring->avail_used_flags | + NBL_PACKED_DESC_F_WRITE); + else + head_flags = + cpu_to_le16(rx_ring->avail_used_flags | + NBL_PACKED_DESC_F_WRITE); + } + + next_to_use += rx_ring->frags_num_per_page; + rx_ring->tail_ptr += rx_ring->frags_num_per_page; + count -= rx_ring->frags_num_per_page; + if (next_to_use == rx_ring->desc_num) { + next_to_use = 0; + rx_desc = NBL_RX_DESC(rx_ring, next_to_use); + rx_buf = NBL_RX_BUF(rx_ring, next_to_use); + rx_ring->avail_used_flags ^= + BIT(NBL_PACKED_DESC_F_AVAIL) | + BIT(NBL_PACKED_DESC_F_USED); + } + } while (count); + + if (next_to_use != head) { + /* wmb */ + wmb(); + head_desc->flags = head_flags; + rx_ring->next_to_use = next_to_use; + } + + return !!count; +} + +static void nbl_unmap_and_free_tx_resource(struct nbl_res_tx_ring *ring, + struct nbl_tx_buffer *tx_buffer, + bool free, bool in_napi) +{ + struct device *dma_dev = NBL_RING_TO_DMA_DEV(ring); + + if (tx_buffer->skb) { + if (likely(free)) { + if (in_napi) + napi_consume_skb(tx_buffer->skb, + NBL_TX_POLL_WEIGHT); + else + 
dev_kfree_skb_any(tx_buffer->skb);
+		}
+
+		if (dma_unmap_len(tx_buffer, len))
+			dma_unmap_single(dma_dev,
+					 dma_unmap_addr(tx_buffer, dma),
+					 dma_unmap_len(tx_buffer, len),
+					 DMA_TO_DEVICE);
+	} else if (tx_buffer->page && dma_unmap_len(tx_buffer, len)) {
+		dma_unmap_page(dma_dev, dma_unmap_addr(tx_buffer, dma),
+			       dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+	} else if (dma_unmap_len(tx_buffer, len)) {
+		dma_unmap_single(dma_dev, dma_unmap_addr(tx_buffer, dma),
+				 dma_unmap_len(tx_buffer, len), DMA_TO_DEVICE);
+	}
+
+	tx_buffer->next_to_watch = NULL;
+	tx_buffer->skb = NULL;
+	tx_buffer->page = 0;
+	tx_buffer->bytecount = 0;
+	tx_buffer->gso_segs = 0;
+	dma_unmap_len_set(tx_buffer, len, 0);
+}
+
+static void nbl_free_tx_ring_bufs(struct nbl_res_tx_ring *tx_ring)
+{
+	struct nbl_tx_buffer *tx_buffer;
+	u16 i;
+
+	i = tx_ring->next_to_clean;
+	tx_buffer = NBL_TX_BUF(tx_ring, i);
+	while (i != tx_ring->next_to_use) {
+		nbl_unmap_and_free_tx_resource(tx_ring, tx_buffer, true, false);
+		i++;
+		tx_buffer++;
+		if (i == tx_ring->desc_num) {
+			i = 0;
+			tx_buffer = NBL_TX_BUF(tx_ring, i);
+		}
+	}
+
+	tx_ring->next_to_clean = 0;
+	tx_ring->next_to_use = 0;
+	tx_ring->tail_ptr = 0;
+
+	tx_ring->used_wrap_counter = 1;
+	tx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+	memset(tx_ring->desc, 0, tx_ring->size);
+}
+
+static void nbl_res_txrx_stop_tx_ring(void *priv, u8 ring_index)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt);
+	struct nbl_res_tx_ring *tx_ring =
+		NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index);
+	struct nbl_res_vector *vector =
+		NBL_RES_MGT_TO_VECTOR(res_mgt, ring_index);
+
+	vector->started = false;
+	/* Flush the napi task to ensure any scheduled napi poll finishes,
+	 * so napi will not access the ring memory (a wild pointer) after
+	 * vector->started has been set to false.
+ */ + napi_synchronize(&vector->nbl_napi.napi); + tx_ring->valid = false; + + nbl_free_tx_ring_bufs(tx_ring); + WRITE_ONCE(NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index), tx_ring); + + devm_kfree(dev, tx_ring->tx_bufs); + tx_ring->tx_bufs = NULL; + + dmam_free_coherent(dma_dev, tx_ring->size, tx_ring->desc, tx_ring->dma); + tx_ring->desc = NULL; + tx_ring->dma = (dma_addr_t)NULL; + tx_ring->size = 0; + + if (nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA], + tx_ring->queue_index)) + netdev_tx_reset_queue(txring_txq(tx_ring)); + + nbl_debug(res_mgt->common, "Stop tx ring %d", ring_index); +} + +static __always_inline bool nbl_dev_page_is_reusable(struct page *page, u8 nid) +{ + return likely(page_to_nid(page) == nid && !page_is_pfmemalloc(page)); +} + +static __always_inline int nbl_rx_cache_put(struct nbl_res_rx_ring *rx_ring, + struct nbl_dma_info *dma_info) +{ + struct nbl_page_cache *cache = &rx_ring->page_cache; + u32 tail_next = (cache->tail + 1) & (NBL_MAX_CACHE_SIZE - 1); + struct nbl_rx_queue_stats *stats = &rx_ring->rx_stats; + + if (tail_next == cache->head) { + stats->rx_cache_full++; + return 0; + } + + if (!nbl_dev_page_is_reusable(dma_info->page, rx_ring->nid)) { + stats->rx_cache_waive++; + return 1; + } + + cache->page_cache[cache->tail] = *dma_info; + cache->tail = tail_next; + + return 2; +} + +static __always_inline void +nbl_page_release_dynamic(struct nbl_res_rx_ring *rx_ring, + struct nbl_dma_info *dma_info, bool recycle) +{ + u32 ret; + + if (likely(recycle)) { + ret = nbl_rx_cache_put(rx_ring, dma_info); + if (ret == 2) + return; + if (ret == 1) + goto free_page; + dma_unmap_page_attrs(rx_ring->dma_dev, dma_info->addr, + dma_info->size, DMA_FROM_DEVICE, + NBL_RX_DMA_ATTR); + page_pool_recycle_direct(rx_ring->page_pool, dma_info->page); + + return; + } +free_page: + dma_unmap_page_attrs(rx_ring->dma_dev, dma_info->addr, dma_info->size, + DMA_FROM_DEVICE, NBL_RX_DMA_ATTR); + page_pool_put_page(rx_ring->page_pool, dma_info->page, 
dma_info->size, + true); +} + +static __always_inline void nbl_put_rx_frag(struct nbl_res_rx_ring *rx_ring, + struct nbl_rx_buffer *buffer, + bool recycle) +{ + if (buffer->last_in_page) + nbl_page_release_dynamic(rx_ring, buffer->di, recycle); +} + +static void nbl_free_rx_ring_bufs(struct nbl_res_rx_ring *rx_ring) +{ + struct nbl_rx_buffer *rx_buf; + u16 i; + + i = rx_ring->next_to_clean; + rx_buf = NBL_RX_BUF(rx_ring, i); + while (i != rx_ring->next_to_use) { + nbl_put_rx_frag(rx_ring, rx_buf, false); + i++; + rx_buf++; + if (i == rx_ring->desc_num) { + i = 0; + rx_buf = NBL_RX_BUF(rx_ring, i); + } + } + + for (i = rx_ring->page_cache.head; i != rx_ring->page_cache.tail; + i = (i + 1) & (NBL_MAX_CACHE_SIZE - 1)) { + struct nbl_dma_info *dma_info = + &rx_ring->page_cache.page_cache[i]; + + nbl_page_release_dynamic(rx_ring, dma_info, false); + } + + rx_ring->next_to_clean = 0; + rx_ring->next_to_use = 0; + rx_ring->tail_ptr = 0; + rx_ring->page_cache.head = 0; + rx_ring->page_cache.tail = 0; + + rx_ring->used_wrap_counter = 1; + rx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL); + memset(rx_ring->desc, 0, rx_ring->size); +} + +static dma_addr_t nbl_res_txrx_start_rx_ring(void *priv, u8 ring_index, + bool use_napi) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt); + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt); + struct nbl_res_rx_ring *rx_ring = + NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index); + struct nbl_res_vector *vector = + NBL_RES_MGT_TO_VECTOR(res_mgt, ring_index); + struct page_pool_params pp_params = { 0 }; + int pkt_len, hw_mtu, max_linear_len; + int buf_size; + int order = 0; + int i, j; + u16 rx_pad, tailroom; + size_t size; + + if (rx_ring->rx_bufs) { + netif_err(common, drv, rx_ring->netdev, + "Try to setup a RX ring with buffer management array already allocated\n"); + return (dma_addr_t)NULL; 
+ } + hw_mtu = rx_ring->netdev->mtu + NBL_PKT_HDR_PAD + NBL_BUFFER_HDR_LEN; + tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + rx_pad = NBL_RX_PAD; + max_linear_len = NBL_RX_BUFSZ; + pkt_len = SKB_DATA_ALIGN(hw_mtu + rx_pad) + tailroom; + rx_ring->linear_skb = true; + if (pkt_len > max_linear_len) { + rx_ring->linear_skb = false; + rx_pad = 0; + tailroom = 0; + pkt_len = SKB_DATA_ALIGN(hw_mtu); + } + buf_size = NBL_RX_BUFSZ; + WARN_ON(buf_size > PAGE_SIZE); + rx_ring->frags_num_per_page = (PAGE_SIZE * (1 << order)) / buf_size; + WARN_ON(rx_ring->frags_num_per_page > NBL_MAX_BATCH_DESC); + rx_ring->buf_len = buf_size - rx_pad - tailroom; + + pp_params.order = order; + pp_params.flags = 0; + pp_params.pool_size = rx_ring->desc_num; + pp_params.nid = dev_to_node(dev); + pp_params.dev = dev; + pp_params.dma_dir = DMA_FROM_DEVICE; + + if (dev_to_node(dev) == NUMA_NO_NODE) + rx_ring->nid = 0; + else + rx_ring->nid = dev_to_node(dev); + + rx_ring->page_pool = page_pool_create(&pp_params); + if (IS_ERR(rx_ring->page_pool)) { + netif_err(common, drv, rx_ring->netdev, + "Failed to create page_pool for RX queue %u\n", + rx_ring->queue_index); + return (dma_addr_t)NULL; + } + size = array_size(rx_ring->desc_num / rx_ring->frags_num_per_page, + sizeof(struct nbl_dma_info)); + rx_ring->di = kvzalloc_node(size, GFP_KERNEL, dev_to_node(dev)); + if (!rx_ring->di) { + netif_err(common, drv, rx_ring->netdev, + "Failed to allocate DMA info array for RX queue %u\n", + rx_ring->queue_index); + goto alloc_di_err; + } + + rx_ring->rx_bufs = devm_kcalloc(dev, rx_ring->desc_num, + sizeof(*rx_ring->rx_bufs), GFP_KERNEL); + if (!rx_ring->rx_bufs) + goto alloc_buffers_err; + + /* Allocate page-aligned memory for the descriptor ring */ + rx_ring->size = ALIGN(rx_ring->desc_num * sizeof(struct nbl_ring_desc), + PAGE_SIZE); + rx_ring->desc = dmam_alloc_coherent(dma_dev, rx_ring->size, + &rx_ring->dma, + GFP_KERNEL | __GFP_ZERO); + if (!rx_ring->desc) { + netif_err(common,
drv, rx_ring->netdev, + "Allocate %u bytes descriptor DMA memory for RX queue %u failed\n", + rx_ring->size, rx_ring->queue_index); + goto alloc_dma_err; + } + + rx_ring->next_to_use = 0; + rx_ring->next_to_clean = 0; + rx_ring->tail_ptr = 0; + + j = 0; + for (i = 0; i < rx_ring->desc_num / rx_ring->frags_num_per_page; i++) { + struct nbl_dma_info *di = &rx_ring->di[i]; + struct nbl_rx_buffer *buffer = &rx_ring->rx_bufs[j]; + int f; + + di->size = (PAGE_SIZE * (1 << order)); + for (f = 0; f < rx_ring->frags_num_per_page; f++, j++) { + buffer = &rx_ring->rx_bufs[j]; + buffer->di = di; + buffer->size = buf_size; + buffer->offset = rx_pad + f * buf_size; + buffer->rx_pad = rx_pad; + buffer->first_in_page = (f == 0); + buffer->last_in_page = + (f == rx_ring->frags_num_per_page - 1); + } + } + + if (nbl_alloc_rx_bufs(rx_ring, rx_ring->desc_num - NBL_MAX_BATCH_DESC)) + goto alloc_rx_bufs_err; + + rx_ring->valid = true; + if (use_napi && vector) + vector->started = true; + + netif_dbg(common, drv, rx_ring->netdev, "Start rx ring %d", ring_index); + return rx_ring->dma; + +alloc_rx_bufs_err: + nbl_free_rx_ring_bufs(rx_ring); + dmam_free_coherent(dma_dev, rx_ring->size, rx_ring->desc, rx_ring->dma); + rx_ring->desc = NULL; + rx_ring->dma = (dma_addr_t)NULL; +alloc_dma_err: + devm_kfree(dev, rx_ring->rx_bufs); + rx_ring->rx_bufs = NULL; +alloc_buffers_err: + kvfree(rx_ring->di); +alloc_di_err: + page_pool_destroy(rx_ring->page_pool); + rx_ring->size = 0; + return (dma_addr_t)NULL; +} + +static void nbl_res_txrx_stop_rx_ring(void *priv, u8 ring_index) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct device *dev = NBL_RES_MGT_TO_DEV(res_mgt); + struct device *dma_dev = NBL_RES_MGT_TO_DMA_DEV(res_mgt); + struct nbl_res_rx_ring *rx_ring = + NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index); + + rx_ring->valid = false; + + nbl_free_rx_ring_bufs(rx_ring); + WRITE_ONCE(NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index), rx_ring); + + devm_kfree(dev, 
rx_ring->rx_bufs); + kvfree(rx_ring->di); + rx_ring->rx_bufs = NULL; + + dmam_free_coherent(dma_dev, rx_ring->size, rx_ring->desc, rx_ring->dma); + rx_ring->desc = NULL; + rx_ring->dma = (dma_addr_t)NULL; + rx_ring->size = 0; + + page_pool_destroy(rx_ring->page_pool); + + nbl_debug(res_mgt->common, "Stop rx ring %d", ring_index); +} + +static __always_inline bool nbl_ring_desc_used(struct nbl_ring_desc *ring_desc, + bool used_wrap_counter) +{ + bool avail; + bool used; + u16 flags; + + flags = le16_to_cpu(ring_desc->flags); + avail = !!(flags & BIT(NBL_PACKED_DESC_F_AVAIL)); + used = !!(flags & BIT(NBL_PACKED_DESC_F_USED)); + + return avail == used && used == used_wrap_counter; +} + +static int nbl_res_txrx_clean_tx_irq(struct nbl_res_tx_ring *tx_ring) +{ + struct nbl_tx_buffer *tx_buffer; + struct nbl_ring_desc *tx_desc; + unsigned int i = tx_ring->next_to_clean; + unsigned int total_tx_pkts = 0; + unsigned int total_tx_bytes = 0; + unsigned int total_tx_descs = 0; + int count = 64; + + tx_buffer = NBL_TX_BUF(tx_ring, i); + tx_desc = NBL_TX_DESC(tx_ring, i); + i -= tx_ring->desc_num; + + do { + struct nbl_ring_desc *end_desc = tx_buffer->next_to_watch; + + if (!end_desc) + break; + + /* smp_rmb */ + smp_rmb(); + + if (!nbl_ring_desc_used(tx_desc, tx_ring->used_wrap_counter)) + break; + + total_tx_pkts += tx_buffer->gso_segs; + total_tx_bytes += tx_buffer->bytecount; + + while (true) { + total_tx_descs++; + nbl_unmap_and_free_tx_resource(tx_ring, tx_buffer, true, + true); + if (tx_desc == end_desc) + break; + i++; + tx_buffer++; + tx_desc++; + if (unlikely(!i)) { + i -= tx_ring->desc_num; + tx_buffer = NBL_TX_BUF(tx_ring, 0); + tx_desc = NBL_TX_DESC(tx_ring, 0); + tx_ring->used_wrap_counter ^= 1; + } + } + + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -= tx_ring->desc_num; + tx_buffer = NBL_TX_BUF(tx_ring, 0); + tx_desc = NBL_TX_DESC(tx_ring, 0); + tx_ring->used_wrap_counter ^= 1; + } + + prefetch(tx_desc); + + } while (--count); + + i += 
tx_ring->desc_num; + + tx_ring->next_to_clean = i; + + u64_stats_update_begin(&tx_ring->syncp); + tx_ring->stats.bytes += total_tx_bytes; + tx_ring->stats.packets += total_tx_pkts; + tx_ring->stats.descs += total_tx_descs; + u64_stats_update_end(&tx_ring->syncp); + if (nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA], + tx_ring->queue_index)) + netdev_tx_completed_queue(txring_txq(tx_ring), total_tx_pkts, + total_tx_bytes); + +#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2) + if (unlikely(total_tx_pkts && netif_carrier_ok(tx_ring->netdev) && + nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA], + tx_ring->queue_index) && + (nbl_unused_tx_desc_count(tx_ring) >= TX_WAKE_THRESHOLD))) { + /* Make sure that anybody stopping the queue after this + * sees the new next_to_clean. + */ + smp_mb(); + + if (__netif_subqueue_stopped(tx_ring->netdev, + tx_ring->queue_index)) { + netif_wake_subqueue(tx_ring->netdev, + tx_ring->queue_index); + dev_dbg(NBL_RING_TO_DEV(tx_ring), "wake queue %u\n", + tx_ring->queue_index); + } + } + + return count; +} + +static void nbl_rx_csum(struct nbl_res_rx_ring *rx_ring, struct sk_buff *skb, + struct nbl_rx_extend_head *hdr) +{ + skb->ip_summed = CHECKSUM_NONE; + skb_checksum_none_assert(skb); + + /* if the user disabled RX checksum offload, the stack verifies it */ + if (!(rx_ring->netdev->features & NETIF_F_RXCSUM)) + return; + + if (!hdr->checksum_status) + return; + + if (hdr->error_code) { + rx_ring->rx_stats.rx_csum_errors++; + return; + } + + skb->ip_summed = CHECKSUM_UNNECESSARY; + rx_ring->rx_stats.rx_csum_packets++; +} + +static __always_inline void nbl_add_rx_frag(struct nbl_rx_buffer *rx_buffer, + struct sk_buff *skb, + unsigned int size) +{ + page_ref_inc(rx_buffer->di->page); + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->di->page, + rx_buffer->offset, size, rx_buffer->size); +} + +static __always_inline int nbl_rx_vlan_pop(struct nbl_res_rx_ring *rx_ring, + struct sk_buff *skb) +{ + struct vlan_ethhdr *veth =
(struct vlan_ethhdr *)skb->data; + + if (!rx_ring->vlan_proto) + return 0; + + if (rx_ring->vlan_proto != ntohs(veth->h_vlan_proto) || + (rx_ring->vlan_tci & VLAN_VID_MASK) != + (ntohs(veth->h_vlan_TCI) & VLAN_VID_MASK)) + return 1; + + memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN); + __skb_pull(skb, VLAN_HLEN); + + return 0; +} + +static void nbl_txrx_register_vsi_ring(void *priv, u16 vsi_index, + u16 ring_offset, u16 ring_num) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt); + + txrx_mgt->vsi_info[vsi_index].ring_offset = ring_offset; + txrx_mgt->vsi_info[vsi_index].ring_num = ring_num; +} + +static void nbl_res_txrx_cfg_txrx_vlan(void *priv, u16 vlan_tci, u16 vlan_proto, + u8 vsi_index) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt); + struct nbl_txrx_vsi_info *vsi_info = &txrx_mgt->vsi_info[vsi_index]; + struct nbl_res_tx_ring *tx_ring; + struct nbl_res_rx_ring *rx_ring; + u16 i; + + if (!txrx_mgt->tx_rings || !txrx_mgt->rx_rings) + return; + + for (i = vsi_info->ring_offset; + i < vsi_info->ring_offset + vsi_info->ring_num; i++) { + tx_ring = txrx_mgt->tx_rings[i]; + rx_ring = txrx_mgt->rx_rings[i]; + + if (tx_ring) { + tx_ring->vlan_tci = vlan_tci; + tx_ring->vlan_proto = vlan_proto; + } + + if (rx_ring) { + rx_ring->vlan_tci = vlan_tci; + rx_ring->vlan_proto = vlan_proto; + } + } +} + +/* + * The current version supports merging multiple descriptors for one packet.
+ */ +static struct sk_buff *nbl_construct_skb(struct nbl_res_rx_ring *rx_ring, + struct napi_struct *napi, + struct nbl_rx_buffer *rx_buf, + unsigned int size) +{ + struct sk_buff *skb; + char *p, *buf; + int tailroom, + shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + unsigned int truesize = rx_buf->size; + unsigned int headlen; + + p = page_address(rx_buf->di->page) + rx_buf->offset; + buf = p - NBL_RX_PAD; + p += NBL_BUFFER_HDR_LEN; + tailroom = truesize - size - NBL_RX_PAD; + size -= NBL_BUFFER_HDR_LEN; + + if (rx_ring->linear_skb && tailroom >= shinfo_size) { + skb = build_skb(buf, truesize); + if (unlikely(!skb)) + return NULL; + + page_ref_inc(rx_buf->di->page); + skb_reserve(skb, p - buf); + skb_put(skb, size); + goto ok; + } + + skb = napi_alloc_skb(napi, NBL_RX_HDR_SIZE); + if (unlikely(!skb)) + return NULL; + + headlen = size; + if (headlen > NBL_RX_HDR_SIZE) + headlen = eth_get_headlen(skb->dev, p, NBL_RX_HDR_SIZE); + + memcpy(__skb_put(skb, headlen), p, ALIGN(headlen, sizeof(long))); + size -= headlen; + if (size) { + page_ref_inc(rx_buf->di->page); + skb_add_rx_frag(skb, 0, rx_buf->di->page, + rx_buf->offset + NBL_BUFFER_HDR_LEN + headlen, + size, truesize); + } +ok: + skb_record_rx_queue(skb, rx_ring->queue_index); + + return skb; +} + +static __always_inline struct nbl_rx_buffer * +nbl_get_rx_buf(struct nbl_res_rx_ring *rx_ring) +{ + struct nbl_rx_buffer *rx_buf; + + rx_buf = NBL_RX_BUF(rx_ring, rx_ring->next_to_clean); + prefetchw(rx_buf->di->page); + + dma_sync_single_range_for_cpu(rx_ring->dma_dev, rx_buf->di->addr, + rx_buf->offset, rx_ring->buf_len, + DMA_FROM_DEVICE); + + return rx_buf; +} + +static __always_inline void nbl_put_rx_buf(struct nbl_res_rx_ring *rx_ring, + struct nbl_rx_buffer *rx_buf) +{ + u16 ntc = rx_ring->next_to_clean + 1; + + /* if at the end of the ring, reset ntc and flip used wrap bit */ + if (unlikely(ntc >= rx_ring->desc_num)) { + ntc = 0; + rx_ring->used_wrap_counter ^= 1; + } + + rx_ring->next_to_clean 
= ntc; + prefetch(NBL_RX_DESC(rx_ring, ntc)); + + nbl_put_rx_frag(rx_ring, rx_buf, true); +} + +static __always_inline int nbl_maybe_stop_tx(struct nbl_res_tx_ring *tx_ring, + unsigned int size) +{ + if (likely(nbl_unused_tx_desc_count(tx_ring) >= size)) + return 0; + + if (!nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA], + tx_ring->queue_index)) + return -EBUSY; + + dev_dbg(NBL_RING_TO_DEV(tx_ring), + "unused_desc_count:%u, size:%u, stop queue %u\n", + nbl_unused_tx_desc_count(tx_ring), size, tx_ring->queue_index); + netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index); + + /* smp_mb */ + smp_mb(); + + if (likely(nbl_unused_tx_desc_count(tx_ring) < size)) + return -EBUSY; + + dev_dbg(NBL_RING_TO_DEV(tx_ring), + "unused_desc_count:%u, size:%u, start queue %u\n", + nbl_unused_tx_desc_count(tx_ring), size, tx_ring->queue_index); + netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index); + + return 0; +} + +static int nbl_res_txrx_clean_rx_irq(struct nbl_res_rx_ring *rx_ring, + struct napi_struct *napi, int budget) +{ + struct nbl_ring_desc *rx_desc; + struct nbl_rx_buffer *rx_buf; + struct nbl_rx_extend_head *hdr; + struct sk_buff *skb = NULL; + unsigned int total_rx_pkts = 0; + unsigned int total_rx_bytes = 0; + unsigned int size; + u32 rx_multicast_packets = 0; + u32 rx_unicast_packets = 0; + u16 desc_count = 0; + u16 num_buffers = 0; + u16 cleaned_count = nbl_unused_rx_desc_count(rx_ring); + bool failure = 0; + bool drop = 0; + u16 tmp; + + while (likely(total_rx_pkts < budget)) { + rx_desc = NBL_RX_DESC(rx_ring, rx_ring->next_to_clean); + if (!nbl_ring_desc_used(rx_desc, rx_ring->used_wrap_counter)) + break; + + dma_rmb(); + size = le32_to_cpu(rx_desc->len); + rx_buf = nbl_get_rx_buf(rx_ring); + + desc_count++; + + if (skb) { + nbl_add_rx_frag(rx_buf, skb, size); + } else { + hdr = page_address(rx_buf->di->page) + rx_buf->offset; + net_prefetch(hdr); + skb = nbl_construct_skb(rx_ring, napi, rx_buf, size); + if (unlikely(!skb)) { + 
rx_ring->rx_stats.rx_alloc_buf_err_cnt++; + break; + } + + num_buffers = (u16)hdr->num_buffers; + nbl_rx_csum(rx_ring, skb, hdr); + drop = nbl_rx_vlan_pop(rx_ring, skb); + } + + cleaned_count++; + nbl_put_rx_buf(rx_ring, rx_buf); + if (desc_count < num_buffers) + continue; + desc_count = 0; + + if (unlikely(eth_skb_pad(skb))) { + skb = NULL; + drop = 0; + continue; + } + + if (unlikely(drop)) { + kfree_skb(skb); + skb = NULL; + drop = 0; + continue; + } + + total_rx_bytes += skb->len; + skb->protocol = eth_type_trans(skb, rx_ring->netdev); + if (unlikely(skb->pkt_type == PACKET_BROADCAST || + skb->pkt_type == PACKET_MULTICAST)) + rx_multicast_packets++; + else + rx_unicast_packets++; + + napi_gro_receive(napi, skb); + skb = NULL; + drop = 0; + total_rx_pkts++; + } + tmp = cleaned_count & (~(NBL_MAX_BATCH_DESC - 1)); + if (tmp) + failure = nbl_alloc_rx_bufs(rx_ring, tmp); + + u64_stats_update_begin(&rx_ring->syncp); + rx_ring->stats.packets += total_rx_pkts; + rx_ring->stats.bytes += total_rx_bytes; + rx_ring->rx_stats.rx_multicast_packets += rx_multicast_packets; + rx_ring->rx_stats.rx_unicast_packets += rx_unicast_packets; + u64_stats_update_end(&rx_ring->syncp); + + return failure ?
budget : total_rx_pkts; +} + +static int nbl_res_napi_poll(struct napi_struct *napi, int budget) +{ + struct nbl_napi_struct *nbl_napi = + container_of(napi, struct nbl_napi_struct, napi); + struct nbl_res_vector *vector = + container_of(nbl_napi, struct nbl_res_vector, nbl_napi); + struct nbl_res_tx_ring *tx_ring; + struct nbl_res_rx_ring *rx_ring; + int complete = 1, cleaned = 0, tx_done = 1; + + tx_ring = vector->tx_ring; + rx_ring = vector->rx_ring; + + if (vector->started) { + tx_done = nbl_res_txrx_clean_tx_irq(tx_ring); + cleaned = nbl_res_txrx_clean_rx_irq(rx_ring, napi, budget); + } + complete = tx_done && (cleaned < budget); + if (!complete) + return budget; + + if (!napi_complete_done(napi, cleaned)) + return min_t(int, cleaned, budget - 1); + + /* unmask irq passthrough for performance */ + if (vector->net_msix_mask_en) + writel(vector->irq_data, + (void __iomem *)vector->irq_enable_base); + + return min_t(int, cleaned, budget - 1); +} + +static unsigned int nbl_xmit_desc_count(struct sk_buff *skb) +{ + unsigned int nr_frags = skb_shinfo(skb)->nr_frags; + + return nr_frags + 1; +} + +/* set up TSO (TCP Segmentation Offload) */ +static int nbl_tx_tso(struct nbl_tx_buffer *first, + struct nbl_tx_hdr_param *hdr_param) +{ + struct sk_buff *skb = first->skb; + union { + struct iphdr *v4; + struct ipv6hdr *v6; + unsigned char *hdr; + } ip; + union { + struct tcphdr *tcp; + struct udphdr *udp; + unsigned char *hdr; + } l4; + u8 l4_start; + u32 payload_len; + u8 header_len = 0; + int err; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return 1; + + if (!skb_is_gso(skb)) + return 1; + + err = skb_cow_head(skb, 0); + if (err < 0) + return err; + + ip.hdr = skb_network_header(skb); + l4.hdr = skb_transport_header(skb); + + /* initialize IP header fields */ + if (ip.v4->version == IP_VERSION_V4) { + ip.v4->tot_len = 0; + ip.v4->check = 0; + } else { + ip.v6->payload_len = 0; + } + + /* length of (MAC + IP) header */ + l4_start = (u8)(l4.hdr - skb->data); + + /* l4 
packet length */ + payload_len = skb->len - l4_start; + + /* remove l4 packet length from L4 pseudo-header checksum */ + if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) { + csum_replace_by_diff(&l4.udp->check, + (__force __wsum)htonl(payload_len)); + /* compute length of UDP segmentation header */ + header_len = (u8)sizeof(*l4.udp) + l4_start; + } else { + csum_replace_by_diff(&l4.tcp->check, + (__force __wsum)htonl(payload_len)); + /* compute length of TCP segmentation header */ + header_len = (u8)(l4.tcp->doff * 4 + l4_start); + } + + hdr_param->tso = 1; + hdr_param->mss = skb_shinfo(skb)->gso_size; + hdr_param->total_hlen = header_len; + + first->gso_segs = skb_shinfo(skb)->gso_segs; + first->bytecount += (first->gso_segs - 1) * header_len; + first->tx_flags = NBL_TX_FLAGS_TSO; + + return first->gso_segs; +} + +/* set up Tx checksum offload */ +static int nbl_tx_csum(struct nbl_tx_buffer *first, + struct nbl_tx_hdr_param *hdr_param) +{ + struct sk_buff *skb = first->skb; + union { + struct iphdr *v4; + struct ipv6hdr *v6; + unsigned char *hdr; + } ip; + union { + struct tcphdr *tcp; + struct udphdr *udp; + unsigned char *hdr; + } l4; + __be16 frag_off, protocol; + u8 inner_ip_type = 0, l4_type = 0, l4_csum = 0, l4_proto = 0; + u32 l2_len = 0, l3_len = 0, l4_len = 0; + unsigned char *exthdr; + int ret; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return 0; + + ip.hdr = skb_network_header(skb); + l4.hdr = skb_transport_header(skb); + + /* compute outer L2 header size */ + l2_len = ip.hdr - skb->data; + + protocol = vlan_get_protocol(skb); + + if (protocol == htons(ETH_P_IP)) { + inner_ip_type = NBL_TX_IIPT_IPV4; + l4_proto = ip.v4->protocol; + } else if (protocol == htons(ETH_P_IPV6)) { + inner_ip_type = NBL_TX_IIPT_IPV6; + exthdr = ip.hdr + sizeof(*ip.v6); + l4_proto = ip.v6->nexthdr; + + if (l4.hdr != exthdr) { + ret = ipv6_skip_exthdr(skb, exthdr - skb->data, + &l4_proto, &frag_off); + if (ret < 0) + return -1; + } + } else { + return -1; + } + + l3_len =
l4.hdr - ip.hdr; + + switch (l4_proto) { + case IPPROTO_TCP: + l4_type = NBL_TX_L4T_TCP; + l4_len = l4.tcp->doff; + l4_csum = 1; + break; + case IPPROTO_UDP: + l4_type = NBL_TX_L4T_UDP; + l4_len = (sizeof(struct udphdr) >> 2); + l4_csum = 1; + break; + case IPPROTO_SCTP: + if (first->tx_flags & NBL_TX_FLAGS_TSO) + return -1; + l4_type = NBL_TX_L4T_RSV; + l4_len = (sizeof(struct sctphdr) >> 2); + l4_csum = 1; + break; + default: + if (first->tx_flags & NBL_TX_FLAGS_TSO) + return -2; + + /* unsupported L4 protocol; the device cannot offload the L4 + * checksum, so compute it in software + */ + skb_checksum_help(skb); + return 0; + } + + hdr_param->mac_len = l2_len >> 1; + hdr_param->ip_len = l3_len >> 2; + hdr_param->l4_len = l4_len; + hdr_param->l4_type = l4_type; + hdr_param->inner_ip_type = inner_ip_type; + hdr_param->l3_csum_en = 0; + hdr_param->l4_csum_en = l4_csum; + + return 1; +} + +static __always_inline int nbl_tx_fill_desc(struct nbl_res_tx_ring *tx_ring, + u64 dma, u32 size, u16 index, + bool first, bool page) +{ + struct nbl_tx_buffer *tx_buffer = NBL_TX_BUF(tx_ring, index); + struct nbl_ring_desc *tx_desc = NBL_TX_DESC(tx_ring, index); + + tx_buffer->dma = dma; + tx_buffer->len = size; + tx_buffer->page = page; + tx_desc->addr = cpu_to_le64(dma); + tx_desc->len = cpu_to_le32(size); + if (!first) + tx_desc->flags = cpu_to_le16(tx_ring->avail_used_flags | + NBL_PACKED_DESC_F_NEXT); + + index++; + if (index == tx_ring->desc_num) { + index = 0; + tx_ring->avail_used_flags ^= 1 << NBL_PACKED_DESC_F_AVAIL | + 1 << NBL_PACKED_DESC_F_USED; + } + + return index; +} + +static int nbl_map_skb(struct nbl_res_tx_ring *tx_ring, struct sk_buff *skb, + u16 first, u16 *desc_index) +{ + u16 index = *desc_index; + const skb_frag_t *frag; + unsigned int frag_num = skb_shinfo(skb)->nr_frags; + struct device *dma_dev = NBL_RING_TO_DMA_DEV(tx_ring); + unsigned int i; + unsigned int size; + dma_addr_t dma; + + size = skb_headlen(skb); + dma = dma_map_single(dma_dev, skb->data,
size, DMA_TO_DEVICE); + if (dma_mapping_error(dma_dev, dma)) + return -1; + + index = nbl_tx_fill_desc(tx_ring, dma, size, index, first, 0); + + if (!frag_num) { + *desc_index = index; + return 0; + } + + frag = &skb_shinfo(skb)->frags[0]; + for (i = 0; i < frag_num; i++) { + size = skb_frag_size(frag); + dma = skb_frag_dma_map(dma_dev, frag, 0, size, DMA_TO_DEVICE); + if (dma_mapping_error(dma_dev, dma)) { + *desc_index = index; + return -1; + } + + index = nbl_tx_fill_desc(tx_ring, dma, size, index, 0, 1); + frag++; + } + + *desc_index = index; + return 0; +} + +static __always_inline void +nbl_tx_fill_tx_extend_header_leonis(union nbl_tx_extend_head *pkthdr, + struct nbl_tx_hdr_param *param) +{ + pkthdr->mac_len = param->mac_len; + pkthdr->ip_len = param->ip_len; + pkthdr->l4_len = param->l4_len; + pkthdr->l4_type = param->l4_type; + pkthdr->inner_ip_type = param->inner_ip_type; + + pkthdr->l4s_sid = param->l4s_sid; + pkthdr->l4s_sync_ind = param->l4s_sync_ind; + pkthdr->l4s_hdl_ind = param->l4s_hdl_ind; + pkthdr->l4s_pbrac_mode = param->l4s_pbrac_mode; + + pkthdr->mss = param->mss; + pkthdr->tso = param->tso; + + pkthdr->fwd = param->fwd; + pkthdr->rss_lag_en = param->rss_lag_en; + pkthdr->dport = param->dport; + pkthdr->dport_id = param->dport_id; + + pkthdr->l3_csum_en = param->l3_csum_en; + pkthdr->l4_csum_en = param->l4_csum_en; +} + +static bool nbl_skb_is_lacp_or_lldp(struct sk_buff *skb) +{ + __be16 protocol; + + protocol = vlan_get_protocol(skb); + if (protocol == htons(ETH_P_SLOW) || protocol == htons(ETH_P_LLDP)) + return true; + + return false; +} + +static int nbl_tx_map(struct nbl_res_tx_ring *tx_ring, struct sk_buff *skb, + struct nbl_tx_hdr_param *hdr_param) +{ + struct device *dma_dev = NBL_RING_TO_DMA_DEV(tx_ring); + struct nbl_tx_buffer *first; + struct nbl_ring_desc *first_desc; + struct nbl_ring_desc *tx_desc; + union nbl_tx_extend_head *pkthdr; + dma_addr_t hdrdma; + int tso, csum; + u16 desc_index = tx_ring->next_to_use; + u16 tmp; + u16 
head = desc_index; + u16 avail_used_flags = tx_ring->avail_used_flags; + u32 pkthdr_len, len; + bool can_push; + bool doorbell = true; + + first_desc = NBL_TX_DESC(tx_ring, desc_index); + first = NBL_TX_BUF(tx_ring, desc_index); + first->gso_segs = 1; + first->bytecount = skb->len; + first->tx_flags = 0; + first->skb = skb; + skb_tx_timestamp(skb); + + can_push = !skb_header_cloned(skb) && + skb_headroom(skb) >= sizeof(*pkthdr); + + if (can_push) + pkthdr = (union nbl_tx_extend_head *)(skb->data - + sizeof(*pkthdr)); + else + pkthdr = (union nbl_tx_extend_head *)(skb->cb); + + tso = nbl_tx_tso(first, hdr_param); + if (tso < 0) { + netdev_err(tx_ring->netdev, "tso ret:%d\n", tso); + goto out_drop; + } + + csum = nbl_tx_csum(first, hdr_param); + if (csum < 0) { + netdev_err(tx_ring->netdev, "csum ret:%d\n", csum); + goto out_drop; + } + + memset(pkthdr, 0, sizeof(*pkthdr)); + switch (tx_ring->product_type) { + case NBL_LEONIS_TYPE: + nbl_tx_fill_tx_extend_header_leonis(pkthdr, hdr_param); + break; + default: + netdev_err(tx_ring->netdev, + "fill tx extend header failed, product type: %d, eth: %u.\n", + tx_ring->product_type, hdr_param->dport_id); + goto out_drop; + } + + pkthdr_len = sizeof(union nbl_tx_extend_head); + + if (can_push) { + __skb_push(skb, pkthdr_len); + if (nbl_map_skb(tx_ring, skb, 1, &desc_index)) + goto dma_map_error; + __skb_pull(skb, pkthdr_len); + } else { + hdrdma = dma_map_single(dma_dev, pkthdr, pkthdr_len, + DMA_TO_DEVICE); + if (dma_mapping_error(dma_dev, hdrdma)) { + tx_ring->tx_stats.tx_dma_busy++; + return NETDEV_TX_BUSY; + } + + first_desc->addr = cpu_to_le64(hdrdma); + first_desc->len = cpu_to_le32(pkthdr_len); + + first->dma = hdrdma; + first->len = pkthdr_len; + + desc_index++; + if (desc_index == tx_ring->desc_num) { + desc_index = 0; + tx_ring->avail_used_flags ^= + 1 << NBL_PACKED_DESC_F_AVAIL | + 1 << NBL_PACKED_DESC_F_USED; + } + if (nbl_map_skb(tx_ring, skb, 0, &desc_index)) + goto dma_map_error; + } + + /* stats */ + if 
(is_multicast_ether_addr(skb->data)) + tx_ring->tx_stats.tx_multicast_packets += tso; + else + tx_ring->tx_stats.tx_unicast_packets += tso; + + if (tso > 1) { + tx_ring->tx_stats.tso_packets++; + tx_ring->tx_stats.tso_bytes += skb->len; + } + tx_ring->tx_stats.tx_csum_packets += csum; + tmp = (desc_index == 0 ? tx_ring->desc_num : desc_index) - 1; + tx_desc = NBL_TX_DESC(tx_ring, tmp); + tx_desc->flags &= cpu_to_le16(~NBL_PACKED_DESC_F_NEXT); + len = le32_to_cpu(first_desc->len); + len += (hdr_param->total_hlen << NBL_TX_TOTAL_HEADERLEN_SHIFT); + first_desc->len = cpu_to_le32(len); + first_desc->id = cpu_to_le16(skb_shinfo(skb)->gso_size); + + tx_ring->next_to_use = desc_index; + nbl_maybe_stop_tx(tx_ring, DESC_NEEDED); + if (nbl_txrx_within_vsi(&tx_ring->vsi_info[NBL_VSI_DATA], + tx_ring->queue_index)) + doorbell = __netdev_tx_sent_queue(txring_txq(tx_ring), + first->bytecount, + netdev_xmit_more()); + /* ensure all descriptor writes are visible before the flags update */ + wmb(); + + first->next_to_watch = tx_desc; + /* write the first desc's flags last */ + if (first_desc == tx_desc) + first_desc->flags = cpu_to_le16(avail_used_flags); + else + first_desc->flags = + cpu_to_le16(avail_used_flags | NBL_PACKED_DESC_F_NEXT); + + /* kick doorbell passthrough for performance */ + if (doorbell) + writel(tx_ring->notify_qid, tx_ring->notify_addr); + + return NETDEV_TX_OK; + +dma_map_error: + while (desc_index != head) { + if (unlikely(!desc_index)) + desc_index = tx_ring->desc_num; + desc_index--; + nbl_unmap_and_free_tx_resource(tx_ring, + NBL_TX_BUF(tx_ring, desc_index), + false, false); + } + + tx_ring->avail_used_flags = avail_used_flags; + tx_ring->tx_stats.tx_dma_busy++; + return NETDEV_TX_BUSY; + +out_drop: + netdev_err(tx_ring->netdev, "tx_map, free_skb\n"); + tx_ring->tx_stats.tx_skb_free++; + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; +} + +static netdev_tx_t nbl_res_txrx_start_xmit(struct sk_buff *skb, + struct net_device *netdev) +{ + struct nbl_resource_mgt *res_mgt = + NBL_ADAP_TO_RES_MGT(NBL_NETDEV_TO_ADAPTER(netdev)); +
struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt); + struct nbl_res_tx_ring *tx_ring = + txrx_mgt->tx_rings[skb_get_queue_mapping(skb)]; + struct nbl_tx_hdr_param hdr_param = { + .mac_len = 14 >> 1, + .ip_len = 20 >> 2, + .l4_len = 20 >> 2, + .mss = 256, + }; + u16 vlan_tci; + __be16 vlan_proto; + unsigned int count; + int ret = 0; + + count = nbl_xmit_desc_count(skb); + /* we cannot transmit a packet with more than 32 descriptors */ + WARN_ON(count > MAX_DESC_NUM_PER_PKT); + if (unlikely(nbl_maybe_stop_tx(tx_ring, count))) { + if (net_ratelimit()) + dev_dbg(NBL_RING_TO_DEV(tx_ring), + "no desc to tx pkt in queue %u\n", + tx_ring->queue_index); + tx_ring->tx_stats.tx_busy++; + return NETDEV_TX_BUSY; + } + + if (tx_ring->vlan_proto || skb_vlan_tag_present(skb)) { + if (tx_ring->vlan_proto) { + vlan_proto = htons(tx_ring->vlan_proto); + vlan_tci = tx_ring->vlan_tci; + } + + if (skb_vlan_tag_present(skb)) { + vlan_proto = skb->vlan_proto; + vlan_tci = skb_vlan_tag_get(skb); + } + + skb = vlan_insert_tag_set_proto(skb, vlan_proto, vlan_tci); + if (!skb) + return NETDEV_TX_OK; + } + /* for dstore and eth, min packet len is 60 */ + eth_skb_pad(skb); + + hdr_param.dport_id = tx_ring->eth_id; + hdr_param.fwd = 1; + hdr_param.rss_lag_en = 0; + + if (nbl_skb_is_lacp_or_lldp(skb)) { + hdr_param.fwd = NBL_TX_FWD_TYPE_CPU_ASSIGNED; + hdr_param.dport = NBL_TX_DPORT_ETH; + } + + /* for unicast packet tx_map all */ + ret = nbl_tx_map(tx_ring, skb, &hdr_param); + return ret; +} + +static void nbl_res_txrx_kick_rx_ring(void *priv, u16 index) +{ + struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv; + struct nbl_hw_ops *hw_ops = NBL_RES_MGT_TO_HW_OPS(res_mgt); + struct nbl_notify_param notify_param = { 0 }; + struct nbl_res_rx_ring *rx_ring = + NBL_RES_MGT_TO_RX_RING(res_mgt, index); + + notify_param.notify_qid = rx_ring->notify_qid; + notify_param.tail_ptr = rx_ring->tail_ptr; + hw_ops->update_tail_ptr(NBL_RES_MGT_TO_HW_PRIV(res_mgt), &notify_param); +} + 
+static struct nbl_napi_struct *nbl_res_txrx_get_vector_napi(void *priv,
+							     u16 index)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+	struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+
+	if (!txrx_mgt->vectors || index >= txrx_mgt->rx_ring_num) {
+		nbl_err(common, "vectors not allocated\n");
+		return NULL;
+	}
+
+	return &txrx_mgt->vectors[index]->nbl_napi;
+}
+
+static void nbl_res_txrx_set_vector_info(void *priv,
+					 u8 __iomem *irq_enable_base,
+					 u32 irq_data, u16 index, bool mask_en)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+	struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+
+	if (!txrx_mgt->vectors || index >= txrx_mgt->rx_ring_num) {
+		nbl_err(common, "vectors not allocated\n");
+		return;
+	}
+
+	txrx_mgt->vectors[index]->irq_enable_base = irq_enable_base;
+	txrx_mgt->vectors[index]->irq_data = irq_data;
+	txrx_mgt->vectors[index]->net_msix_mask_en = mask_en;
+}
+
+static void nbl_res_get_pt_ops(void *priv, struct nbl_resource_pt_ops *pt_ops)
+{
+	pt_ops->start_xmit = nbl_res_txrx_start_xmit;
+	pt_ops->napi_poll = nbl_res_napi_poll;
+}
+
+static u32 nbl_res_txrx_get_tx_headroom(void *priv)
+{
+	return sizeof(union nbl_tx_extend_head);
+}
+
+static bool nbl_res_is_ctrlq(struct nbl_txrx_mgt *txrx_mgt, u16 qid)
+{
+	u16 ring_num = txrx_mgt->vsi_info[NBL_VSI_CTRL].ring_num;
+	u16 ring_offset = txrx_mgt->vsi_info[NBL_VSI_CTRL].ring_offset;
+
+	if (qid >= ring_offset && qid < ring_offset + ring_num)
+		return true;
+
+	return false;
+}
+
+static void nbl_res_txrx_get_net_stats(void *priv, struct nbl_stats *net_stats)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+	struct nbl_res_rx_ring *rx_ring;
+	struct nbl_res_tx_ring *tx_ring;
+	int i;
+	u64 bytes = 0, packets = 0;
+	u64 tso_packets = 0, tso_bytes = 0;
+	u64 tx_csum_packets = 0;
+	u64 rx_csum_packets = 0, rx_csum_errors = 0;
+	u64 tx_multicast_packets = 0, tx_unicast_packets = 0;
+	u64 rx_multicast_packets = 0, rx_unicast_packets = 0;
+	u64 tx_busy = 0, tx_dma_busy = 0;
+	u64 tx_desc_addr_err_cnt = 0;
+	u64 tx_desc_len_err_cnt = 0;
+	u64 rx_desc_addr_err_cnt = 0;
+	u64 rx_alloc_buf_err_cnt = 0;
+	u64 rx_cache_reuse = 0;
+	u64 rx_cache_full = 0;
+	u64 rx_cache_empty = 0;
+	u64 rx_cache_busy = 0;
+	u64 rx_cache_waive = 0;
+	u64 tx_skb_free = 0;
+	unsigned int start;
+
+	rcu_read_lock();
+	for (i = 0; i < txrx_mgt->rx_ring_num; i++) {
+		if (nbl_res_is_ctrlq(txrx_mgt, i))
+			continue;
+
+		rx_ring = NBL_RES_MGT_TO_RX_RING(res_mgt, i);
+		do {
+			start = u64_stats_fetch_begin(&rx_ring->syncp);
+			bytes += rx_ring->stats.bytes;
+			packets += rx_ring->stats.packets;
+			rx_csum_packets += rx_ring->rx_stats.rx_csum_packets;
+			rx_csum_errors += rx_ring->rx_stats.rx_csum_errors;
+			rx_multicast_packets +=
+				rx_ring->rx_stats.rx_multicast_packets;
+			rx_unicast_packets +=
+				rx_ring->rx_stats.rx_unicast_packets;
+			rx_desc_addr_err_cnt +=
+				rx_ring->rx_stats.rx_desc_addr_err_cnt;
+			rx_alloc_buf_err_cnt +=
+				rx_ring->rx_stats.rx_alloc_buf_err_cnt;
+			rx_cache_reuse += rx_ring->rx_stats.rx_cache_reuse;
+			rx_cache_full += rx_ring->rx_stats.rx_cache_full;
+			rx_cache_empty += rx_ring->rx_stats.rx_cache_empty;
+			rx_cache_busy += rx_ring->rx_stats.rx_cache_busy;
+			rx_cache_waive += rx_ring->rx_stats.rx_cache_waive;
+		} while (u64_stats_fetch_retry(&rx_ring->syncp, start));
+	}
+
+	net_stats->rx_packets = packets;
+	net_stats->rx_bytes = bytes;
+
+	net_stats->rx_csum_packets = rx_csum_packets;
+	net_stats->rx_csum_errors = rx_csum_errors;
+	net_stats->rx_multicast_packets = rx_multicast_packets;
+	net_stats->rx_unicast_packets = rx_unicast_packets;
+
+	bytes = 0;
+	packets = 0;
+
+	for (i = 0; i < txrx_mgt->tx_ring_num; i++) {
+		if (nbl_res_is_ctrlq(txrx_mgt, i))
+			continue;
+
+		tx_ring = NBL_RES_MGT_TO_TX_RING(res_mgt, i);
+		do {
+			start = u64_stats_fetch_begin(&tx_ring->syncp);
+			bytes += tx_ring->stats.bytes;
+			packets += tx_ring->stats.packets;
+			tso_packets += tx_ring->tx_stats.tso_packets;
+			tso_bytes += tx_ring->tx_stats.tso_bytes;
+			tx_csum_packets += tx_ring->tx_stats.tx_csum_packets;
+			tx_busy += tx_ring->tx_stats.tx_busy;
+			tx_dma_busy += tx_ring->tx_stats.tx_dma_busy;
+			tx_multicast_packets +=
+				tx_ring->tx_stats.tx_multicast_packets;
+			tx_unicast_packets +=
+				tx_ring->tx_stats.tx_unicast_packets;
+			tx_skb_free += tx_ring->tx_stats.tx_skb_free;
+			tx_desc_addr_err_cnt +=
+				tx_ring->tx_stats.tx_desc_addr_err_cnt;
+			tx_desc_len_err_cnt +=
+				tx_ring->tx_stats.tx_desc_len_err_cnt;
+		} while (u64_stats_fetch_retry(&tx_ring->syncp, start));
+	}
+
+	rcu_read_unlock();
+
+	net_stats->tx_bytes = bytes;
+	net_stats->tx_packets = packets;
+	net_stats->tso_packets = tso_packets;
+	net_stats->tso_bytes = tso_bytes;
+	net_stats->tx_csum_packets = tx_csum_packets;
+	net_stats->tx_busy = tx_busy;
+	net_stats->tx_dma_busy = tx_dma_busy;
+	net_stats->tx_multicast_packets = tx_multicast_packets;
+	net_stats->tx_unicast_packets = tx_unicast_packets;
+	net_stats->tx_skb_free = tx_skb_free;
+	net_stats->tx_desc_addr_err_cnt = tx_desc_addr_err_cnt;
+	net_stats->tx_desc_len_err_cnt = tx_desc_len_err_cnt;
+	net_stats->rx_desc_addr_err_cnt = rx_desc_addr_err_cnt;
+	net_stats->rx_alloc_buf_err_cnt = rx_alloc_buf_err_cnt;
+	net_stats->rx_cache_reuse = rx_cache_reuse;
+	net_stats->rx_cache_full = rx_cache_full;
+	net_stats->rx_cache_empty = rx_cache_empty;
+	net_stats->rx_cache_busy = rx_cache_busy;
+	net_stats->rx_cache_waive = rx_cache_waive;
+}
+
+static int nbl_res_queue_stop_abnormal_sw_queue(void *priv, u16 local_queue_id,
+						int type)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_res_vector *vector = NULL;
+	struct nbl_res_tx_ring *tx_ring =
+		NBL_RES_MGT_TO_TX_RING(res_mgt, local_queue_id);
+
+	if (!tx_ring)
+		return -EINVAL;
+	if (type != NBL_TX)
+		return 0;
+	if (tx_ring)
+		vector = NBL_RES_MGT_TO_VECTOR(res_mgt, local_queue_id);
+
+	if (!tx_ring->valid)
+		return -EINVAL;
+
+	if (vector && !vector->started)
+		return -EINVAL;
+
+	if (vector) {
+		vector->started = false;
+		napi_synchronize(&vector->nbl_napi.napi);
+		netif_stop_subqueue(tx_ring->netdev, local_queue_id);
+	}
+
+	return 0;
+}
+
+static dma_addr_t nbl_res_txrx_restore_abnormal_ring(void *priv, int ring_index,
+						     int type)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_res_tx_ring *tx_ring =
+		NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index);
+	struct nbl_res_rx_ring *rx_ring =
+		NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index);
+
+	switch (type) {
+	case NBL_TX:
+		if (tx_ring && tx_ring->valid) {
+			nbl_res_txrx_stop_tx_ring(res_mgt, ring_index);
+			return nbl_res_txrx_start_tx_ring(res_mgt, ring_index);
+		} else {
+			return (dma_addr_t)NULL;
+		}
+		break;
+	case NBL_RX:
+		if (rx_ring && rx_ring->valid) {
+			nbl_res_txrx_stop_rx_ring(res_mgt, ring_index);
+			return nbl_res_txrx_start_rx_ring(res_mgt, ring_index,
+							  true);
+		} else {
+			return (dma_addr_t)NULL;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return (dma_addr_t)NULL;
+}
+
+static int nbl_res_txrx_restart_abnormal_ring(void *priv, int ring_index,
+					      int type)
+{
+	struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+	struct nbl_res_tx_ring *tx_ring =
+		NBL_RES_MGT_TO_TX_RING(res_mgt, ring_index);
+	struct nbl_res_rx_ring *rx_ring =
+		NBL_RES_MGT_TO_RX_RING(res_mgt, ring_index);
+	struct nbl_res_vector *vector = NULL;
+	int ret = 0;
+
+	if (tx_ring)
+		vector = NBL_RES_MGT_TO_VECTOR(res_mgt, ring_index);
+
+	switch (type) {
+	case NBL_TX:
+		if (tx_ring && tx_ring->valid) {
+			writel(tx_ring->notify_qid, tx_ring->notify_addr);
+			netif_start_subqueue(tx_ring->netdev, ring_index);
+		} else {
+			ret = -EINVAL;
+		}
+		break;
+	case NBL_RX:
+		if (rx_ring && rx_ring->valid)
+			nbl_res_txrx_kick_rx_ring(res_mgt, ring_index);
+		else
+			ret = -EINVAL;
+		break;
+	default:
+		break;
+	}
+
+	if (vector) {
+		if (vector->net_msix_mask_en)
+			writel(vector->irq_data,
+			       (void __iomem *)vector->irq_enable_base);
+		vector->started = true;
+	}
+
+	return ret;
+}
+
+static int nbl_res_get_max_mtu(void *priv)
+{
+	return NBL_MAX_JUMBO_FRAME_SIZE - NBL_PKT_HDR_PAD;
+}
+
+/* NBL_TXRX_SET_OPS(ops_name, func)
+ *
+ * Use X Macros to reduce setup and remove codes.
+ */
+#define NBL_TXRX_OPS_TBL						\
+do {									\
+	NBL_TXRX_SET_OPS(get_resource_pt_ops, nbl_res_get_pt_ops);	\
+	NBL_TXRX_SET_OPS(alloc_rings, nbl_res_txrx_alloc_rings);	\
+	NBL_TXRX_SET_OPS(remove_rings, nbl_res_txrx_remove_rings);	\
+	NBL_TXRX_SET_OPS(start_tx_ring, nbl_res_txrx_start_tx_ring);	\
+	NBL_TXRX_SET_OPS(stop_tx_ring, nbl_res_txrx_stop_tx_ring);	\
+	NBL_TXRX_SET_OPS(start_rx_ring, nbl_res_txrx_start_rx_ring);	\
+	NBL_TXRX_SET_OPS(stop_rx_ring, nbl_res_txrx_stop_rx_ring);	\
+	NBL_TXRX_SET_OPS(kick_rx_ring, nbl_res_txrx_kick_rx_ring);	\
+	NBL_TXRX_SET_OPS(get_vector_napi,				\
+			 nbl_res_txrx_get_vector_napi);			\
+	NBL_TXRX_SET_OPS(set_vector_info,				\
+			 nbl_res_txrx_set_vector_info);			\
+	NBL_TXRX_SET_OPS(get_tx_headroom,				\
+			 nbl_res_txrx_get_tx_headroom);			\
+	NBL_TXRX_SET_OPS(get_net_stats, nbl_res_txrx_get_net_stats);	\
+	NBL_TXRX_SET_OPS(stop_abnormal_sw_queue,			\
+			 nbl_res_queue_stop_abnormal_sw_queue);		\
+	NBL_TXRX_SET_OPS(restore_abnormal_ring,				\
+			 nbl_res_txrx_restore_abnormal_ring);		\
+	NBL_TXRX_SET_OPS(restart_abnormal_ring,				\
+			 nbl_res_txrx_restart_abnormal_ring);		\
+	NBL_TXRX_SET_OPS(register_vsi_ring,				\
+			 nbl_txrx_register_vsi_ring);			\
+	NBL_TXRX_SET_OPS(cfg_txrx_vlan, nbl_res_txrx_cfg_txrx_vlan);	\
+	NBL_TXRX_SET_OPS(get_max_mtu, nbl_res_get_max_mtu);		\
+} while (0)
+
+/* Structure starts here, adding an op should not modify anything below */
+static int nbl_txrx_setup_mgt(struct device *dev,
+			      struct nbl_txrx_mgt **txrx_mgt)
+{
+	*txrx_mgt = devm_kzalloc(dev, sizeof(struct nbl_txrx_mgt), GFP_KERNEL);
+	if (!*txrx_mgt)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void nbl_txrx_remove_mgt(struct device *dev,
+				struct nbl_txrx_mgt **txrx_mgt)
+{
+	devm_kfree(dev, *txrx_mgt);
+	*txrx_mgt = NULL;
+}
+
+int nbl_txrx_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev;
+	struct nbl_txrx_mgt **txrx_mgt;
+
+	dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	txrx_mgt = &NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+	return nbl_txrx_setup_mgt(dev, txrx_mgt);
+}
+
+void nbl_txrx_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+	struct device *dev;
+	struct nbl_txrx_mgt **txrx_mgt;
+
+	dev = NBL_RES_MGT_TO_DEV(res_mgt);
+	txrx_mgt = &NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+	if (!(*txrx_mgt))
+		return;
+
+	nbl_txrx_remove_mgt(dev, txrx_mgt);
+}
+
+int nbl_txrx_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_TXRX_SET_OPS(name, func) \
+	do { \
+		res_ops->NBL_NAME(name) = func; \
+	} while (0)
+	NBL_TXRX_OPS_TBL;
+#undef NBL_TXRX_SET_OPS
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
new file mode 100644
index 000000000000..de11f30a8210
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_hw/nbl_txrx.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_TXRX_H_
+#define _NBL_TXRX_H_
+
+#include "nbl_resource.h"
+
+#define NBL_RING_TO_COMMON(ring)	((ring)->common)
+#define NBL_RING_TO_DEV(ring)		((ring)->dma_dev)
+#define NBL_RING_TO_DMA_DEV(ring)	((ring)->dma_dev)
+
+#define NBL_MIN_DESC_NUM		128
+#define NBL_MAX_DESC_NUM		32768
+
+#define NBL_PACKED_DESC_F_NEXT		1
+#define NBL_PACKED_DESC_F_WRITE		2
+#define NBL_PACKED_DESC_F_AVAIL		7
+#define NBL_PACKED_DESC_F_USED		15
+
+#define NBL_TX_DESC(tx_ring, i)		(&(((tx_ring)->desc)[i]))
+#define NBL_RX_DESC(rx_ring, i)		(&(((rx_ring)->desc)[i]))
+#define NBL_TX_BUF(tx_ring, i)		(&(((tx_ring)->tx_bufs)[i]))
+#define NBL_RX_BUF(rx_ring, i)		(&(((rx_ring)->rx_bufs)[i]))
+
+#define NBL_RX_BUF_256			256
+#define NBL_RX_HDR_SIZE			NBL_RX_BUF_256
+#define NBL_BUFFER_HDR_LEN		(sizeof(struct nbl_rx_extend_head))
+#define NBL_RX_PAD			(NET_IP_ALIGN + NET_SKB_PAD)
+#define NBL_RX_BUFSZ			2048
+#define NBL_RX_DMA_ATTR			(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
+#define NBL_TX_TOTAL_HEADERLEN_SHIFT	24
+#define DESC_NEEDED			(MAX_SKB_FRAGS + 4)
+#define NBL_TX_POLL_WEIGHT		256
+#define NBL_TXD_DATALEN_BITS		16
+#define NBL_TXD_DATALEN_MAX		BIT(NBL_TXD_DATALEN_BITS)
+#define MAX_DESC_NUM_PER_PKT		(32)
+
+#define IP_VERSION_V4			(4)
+#define NBL_TX_FLAGS_TSO		BIT(0)
+
+/* TX inner IP header type */
+enum nbl_tx_iipt {
+	NBL_TX_IIPT_NONE = 0x0,
+	NBL_TX_IIPT_IPV6 = 0x1,
+	NBL_TX_IIPT_IPV4 = 0x2,
+	NBL_TX_IIPT_RSV = 0x3
+};
+
+/* TX L4 packet type */
+enum nbl_tx_l4t {
+	NBL_TX_L4T_NONE = 0x0,
+	NBL_TX_L4T_TCP = 0x1,
+	NBL_TX_L4T_UDP = 0x2,
+	NBL_TX_L4T_RSV = 0x3
+};
+
+struct nbl_tx_hdr_param {
+	u8 l4s_pbrac_mode;
+	u8 l4s_hdl_ind;
+	u8 l4s_sync_ind;
+	u8 tso;
+	u16 l4s_sid;
+	u16 mss;
+	u8 mac_len;
+	u8 ip_len;
+	u8 l4_len;
+	u8 l4_type;
+	u8 inner_ip_type;
+	u8 l3_csum_en;
+	u8 l4_csum_en;
+	u16 total_hlen;
+	u16 dport_id:10;
+	u16 fwd:2;
+	u16 dport:3;
+	u16 rss_lag_en:1;
+};
+
+union nbl_tx_extend_head {
+	struct {
+		/* DW0 */
+		u32 mac_len		:5;
+		u32 ip_len		:5;
+		u32 l4_len		:4;
+		u32 l4_type		:2;
+		u32 inner_ip_type	:2;
+		u32 external_ip_type	:2;
+		u32 external_ip_len	:5;
+		u32 l4_tunnel_type	:2;
+		u32 l4_tunnel_len	:5;
+		/* DW1 */
+		u32 l4s_sid		:10;
+		u32 l4s_sync_ind	:1;
+		u32 l4s_redun_ind	:1;
+		u32 l4s_redun_head_ind	:1;
+		u32 l4s_hdl_ind		:1;
+		u32 l4s_pbrac_mode	:1;
+		u32 rsv0		:2;
+		u32 mss			:14;
+		u32 tso			:1;
+		/* DW2 */
+		/* if dport = NBL_TX_DPORT_ETH; dport_info = 0
+		 * if dport = NBL_TX_DPORT_HOST; dport_info = host queue id
+		 * if dport = NBL_TX_DPORT_ECPU; dport_info = ecpu queue_id
+		 */
+		u32 dport_info		:11;
+		/* if dport = NBL_TX_DPORT_ETH; dport_id[3:0] = eth port id,
+		 * dport_id[9:4] = lag id
+		 * if dport = NBL_TX_DPORT_HOST; dport_id[9:0] = host vsi_id
+		 * if dport = NBL_TX_DPORT_ECPU; dport_id[9:0] = ecpu vsi_id
+		 */
+		u32 dport_id		:10;
+#define NBL_TX_DPORT_ID_LAG_OFFSET (4)
+		u32 dport		:3;
+#define NBL_TX_DPORT_ETH (0)
+#define NBL_TX_DPORT_HOST (1)
+#define NBL_TX_DPORT_ECPU (2)
+#define NBL_TX_DPORT_EMP (3)
+#define NBL_TX_DPORT_BMC (4)
+		u32 fwd			:2;
+#define NBL_TX_FWD_TYPE_DROP (0)
+#define NBL_TX_FWD_TYPE_NORMAL (1)
+#define NBL_TX_FWD_TYPE_RSV (2)
+#define NBL_TX_FWD_TYPE_CPU_ASSIGNED (3)
+		u32 rss_lag_en		:1;
+		u32 l4_csum_en		:1;
+		u32 l3_csum_en		:1;
+		u32 rsv1		:3;
+	};
+	u32 dw[3];
+};
+
+struct nbl_rx_extend_head {
+	/* DW0 */
+	/* 0x0:eth, 0x1:host, 0x2:ecpu, 0x3:emp, 0x4:bcm */
+	uint32_t sport			:3;
+	uint32_t dport_info		:11;
+	/* sport = 0, sport_id[3:0] = eth id,
+	 * sport = 1, sport_id[9:0] = host vsi_id,
+	 * sport = 2, sport_id[9:0] = ecpu vsi_id,
+	 */
+	uint32_t sport_id		:10;
+	/* 0x0:drop, 0x1:normal, 0x2:cpu upcall */
+	uint32_t fwd			:2;
+	uint32_t rsv0			:6;
+	/* DW1 */
+	uint32_t error_code		:6;
+	uint32_t ptype			:10;
+	uint32_t profile_id		:4;
+	uint32_t checksum_status	:1;
+	uint32_t rsv1			:1;
+	uint32_t l4s_sid		:10;
+	/* DW2 */
+	uint32_t rsv3			:2;
+	uint32_t l4s_hdl_ind		:1;
+	uint32_t l4s_tcp_offset		:14;
+	uint32_t l4s_resync_ind		:1;
+	uint32_t l4s_check_ind		:1;
+	uint32_t l4s_dec_ind		:1;
+	uint32_t rsv2			:4;
+	uint32_t num_buffers		:8;
+} __packed;
+
+static inline u16 nbl_unused_rx_desc_count(struct nbl_res_rx_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->desc_num) + ntc - ntu - 1;
+}
+
+static inline u16 nbl_unused_tx_desc_count(struct nbl_res_tx_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->desc_num) + ntc - ntu - 1;
+}
+
+#endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
index e2c5a865892f..246ef618e651 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_hw.h
@@ -105,6 +105,9 @@ struct nbl_hw_ops {
 	void (*update_adminq_queue_tail_ptr)(void *priv, u16 tail_ptr, u8 txrx);
 	bool (*check_adminq_dma_err)(void *priv, bool tx);
 
+	void (*update_tail_ptr)(void *priv, struct nbl_notify_param *param);
+	u8 __iomem *(*get_tail_ptr)(void *priv);
+
 	int (*set_vsi_mtu)(void *priv, u16 vsi_id, u16 mtu_sel);
 
 	u8 __iomem *(*get_hw_addr)(void *priv, size_t *size);
@@ -127,6 +130,7 @@ struct nbl_hw_ops {
 			      u16 next_mcc_id);
 	void (*update_mcc_next_node)(void *priv, u16 mcc_id, u16 next_mcc_id);
 	int (*init_fem)(void *priv);
+	void (*set_fw_ping)(void *priv, u32 ping);
 
 	u32 (*get_fw_pong)(void *priv);
 	void (*set_fw_pong)(void *priv, u32 pong);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 934612c12fc1..173ff2ebef81 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -227,6 +227,11 @@ struct nbl_hw_stats {
 	struct nbl_ustore_stats start_ustore_stats;
 };
 
+struct nbl_notify_param {
+	u16 notify_qid;
+	u16 tail_ptr;
+};
+
 enum nbl_port_type {
 	NBL_PORT_TYPE_UNKNOWN = 0,
 	NBL_PORT_TYPE_FIBRE,
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 19+ messages in thread
* [PATCH v2 net-next 11/15] net/nebula-matrix: add Dispatch layer definitions and implementation
  2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
  ` (9 preceding siblings ...)
  2026-01-09 10:01 ` [PATCH v2 net-next 10/15] net/nebula-matrix: add txrx " illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
  2026-01-09 10:01 ` [PATCH v2 net-next 12/15] net/nebula-matrix: add Service " illusion.wang
  ` (4 subsequent siblings)
  15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
	vadim.fedorenko, lukas.bulwahn, edumazet, open list

There are two routing paths:

Dispatch Layer -> Resource Layer -> HW Layer
The Dispatch Layer routes tasks to the Resource Layer, which may
interact with the HW Layer for hardware writes.

Dispatch Layer -> Channel Layer
The Dispatch Layer redirects hooks to the Channel Layer.

The primary challenge at the Dispatch layer lies in determining the
routing approach, namely, how to decide which interfaces should
directly invoke the Resource layer's interfaces and which should
transmit requests via channels to the management PF for processing.

To address this, a ctrl_lvl (control level) mechanism is established,
which comprises two parts: the control level declared by each interface
and the control level configured by the upper layer. The effect is that
when the upper layer configures a specific control level, all
interfaces declaring this level will directly call the Resource layer's
interfaces; otherwise, they will send requests via channels.

For instance, consider a regular PF that possesses network (net)
capabilities but lacks control (ctrl) capabilities. It will only
configure NET_LVL at the Dispatch layer.
In this scenario, all interfaces declaring NET_LVL will directly invoke
the Resource layer's interfaces, while those declaring CTRL_LVL will
send requests via channels to the management PF. Conversely, if it is
the management PF, it will configure both NET_LVL and CTRL_LVL at the
Dispatch layer. Consequently, interfaces declaring CTRL_LVL will also
directly call the Resource layer's interfaces without sending requests
via channels. This configuration logic can be dynamic.

Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
 .../net/ethernet/nebula-matrix/nbl/Makefile   |    1 +
 .../net/ethernet/nebula-matrix/nbl/nbl_core.h |    4 +
 .../nebula-matrix/nbl/nbl_core/nbl_dispatch.c | 4265 +++++++++++++++++
 .../nebula-matrix/nbl/nbl_core/nbl_dispatch.h |   78 +
 .../nbl/nbl_include/nbl_def_dispatch.h        |  190 +
 .../net/ethernet/nebula-matrix/nbl/nbl_main.c |    7 +
 6 files changed, 4545 insertions(+)
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index 7e2aebdad098..dba7bf27be46 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -17,6 +17,7 @@ nbl_core-objs += nbl_common/nbl_common.o \
 		 nbl_hw/nbl_queue.o \
 		 nbl_hw/nbl_vsi.o \
 		 nbl_hw/nbl_adminq.o \
+		 nbl_core/nbl_dispatch.o \
 		 nbl_main.o
 
 # Provide include files
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index eef0e76fb9db..d32a8c4a7519 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -12,6 +12,7 @@
 #include "nbl_def_channel.h"
 #include "nbl_def_hw.h"
 #include "nbl_def_resource.h"
+#include "nbl_def_dispatch.h"
 #include "nbl_def_common.h"
 
 #define NBL_ADAP_TO_PDEV(adapter)	((adapter)->pdev)
@@ -21,9 +22,11 @@
 #define NBL_ADAP_TO_HW_MGT(adapter)	((adapter)->core.hw_mgt)
 #define NBL_ADAP_TO_RES_MGT(adapter)	((adapter)->core.res_mgt)
+#define NBL_ADAP_TO_DISP_MGT(adapter)	((adapter)->core.disp_mgt)
 #define NBL_ADAP_TO_CHAN_MGT(adapter)	((adapter)->core.chan_mgt)
 #define NBL_ADAP_TO_HW_OPS_TBL(adapter)	((adapter)->intf.hw_ops_tbl)
 #define NBL_ADAP_TO_RES_OPS_TBL(adapter)	((adapter)->intf.resource_ops_tbl)
+#define NBL_ADAP_TO_DISP_OPS_TBL(adapter)	((adapter)->intf.dispatch_ops_tbl)
 #define NBL_ADAP_TO_CHAN_OPS_TBL(adapter)	((adapter)->intf.channel_ops_tbl)
 
 #define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \
@@ -67,6 +70,7 @@ enum {
 struct nbl_interface {
 	struct nbl_hw_ops_tbl *hw_ops_tbl;
 	struct nbl_resource_ops_tbl *resource_ops_tbl;
+	struct nbl_dispatch_ops_tbl *dispatch_ops_tbl;
 	struct nbl_channel_ops_tbl *channel_ops_tbl;
 };
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
new file mode 100644
index 000000000000..fe8554b0ac16
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.c
@@ -0,0 +1,4265 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+#include <linux/etherdevice.h>
+#include "nbl_dispatch.h"
+
+static int nbl_disp_chan_add_macvlan_req(void *priv, u8 *mac, u16 vlan, u16 vsi)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_chan_param_add_macvlan param;
+	struct nbl_chan_send_info chan_send;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_common_info *common;
+
+	if (!disp_mgt || !mac)
+		return -EINVAL;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	memcpy(param.mac, mac, sizeof(param.mac));
+	param.vlan = vlan;
+	param.vsi = vsi;
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_ADD_MACVLAN,
+		      &param, sizeof(param), NULL, 0, 1);
+
+	if (chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send))
+		return -EFAULT;
+
+	return 0;
+}
+
+static void nbl_disp_chan_add_macvlan_resp(void *priv, u16 src_id, u16 msg_id,
+					   void *data, u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+	struct nbl_chan_param_add_macvlan *param;
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_ack_info chan_ack;
+	int err = NBL_CHAN_RESP_OK;
+	int ret;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	param = (struct nbl_chan_param_add_macvlan *)data;
+
+	ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_macvlan, p,
+				    param->mac, param->vlan, param->vsi);
+	if (ret)
+		err = NBL_CHAN_RESP_ERR;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_ADD_MACVLAN, msg_id, err,
+		     NULL, 0);
+	ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				 &chan_ack);
+	if (ret)
+		dev_err(dev,
+			"channel send ack failed with ret: %d, msg_type: %d\n",
+			ret, NBL_CHAN_MSG_ADD_MACVLAN);
+}
+
+static void nbl_disp_chan_del_macvlan_req(void *priv, u8 *mac, u16 vlan,
+					  u16 vsi)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_param_del_macvlan param;
+	struct nbl_chan_send_info chan_send;
+	struct nbl_common_info *common;
+
+	if (!disp_mgt || !mac)
+		return;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	memcpy(param.mac, mac, sizeof(param.mac));
+	param.vlan = vlan;
+	param.vsi = vsi;
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_DEL_MACVLAN,
+		      &param, sizeof(param), NULL, 0, 1);
+	chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_del_macvlan_resp(void *priv, u16 src_id, u16 msg_id,
+					   void *data, u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_chan_param_del_macvlan *param;
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_ack_info chan_ack;
+	int err = NBL_CHAN_RESP_OK;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+	param = (struct nbl_chan_param_del_macvlan *)data;
+
+	NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_macvlan, p, param->mac,
+			  param->vlan, param->vsi);
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_DEL_MACVLAN, msg_id, err,
+		     NULL, 0);
+	chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_add_multi_rule_req(void *priv, u16 vsi_id)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_send_info chan_send;
+	struct nbl_common_info *common;
+
+	if (!disp_mgt)
+		return -EINVAL;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_ADD_MULTI_RULE,
+		      &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+	return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				  &chan_send);
+}
+
+static void nbl_disp_chan_add_multi_rule_resp(void *priv, u16 src_id,
+					      u16 msg_id, void *data,
+					      u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_ack_info chan_ack;
+	u8 broadcast_mac[ETH_ALEN];
+	int err = NBL_CHAN_RESP_OK;
+	int ret;
+	u16 vsi_id;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+	vsi_id = *(u16 *)data;
+	memset(broadcast_mac, 0xFF, ETH_ALEN);
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_macvlan, p,
+				    broadcast_mac, 0, vsi_id);
+	if (ret)
+		err = NBL_CHAN_RESP_ERR;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_ADD_MULTI_RULE, msg_id, err,
+		     NULL, 0);
+	chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_del_multi_rule_req(void *priv, u16 vsi_id)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_send_info chan_send;
+	struct nbl_common_info *common;
+
+	if (!disp_mgt)
+		return;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_DEL_MULTI_RULE,
+		      &vsi_id, sizeof(vsi_id), NULL, 0, 1);
+	chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_del_multi_rule_resp(void *priv, u16 src_id,
+					      u16 msg_id, void *data,
+					      u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_ack_info chan_ack;
+	u8 broadcast_mac[ETH_ALEN];
+	int err = NBL_CHAN_RESP_OK;
+	u16 vsi_id;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+	vsi_id = *(u16 *)data;
+	memset(broadcast_mac, 0xFF, ETH_ALEN);
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_macvlan, p, broadcast_mac, 0,
+			  vsi_id);
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_DEL_MULTI_RULE, msg_id, err,
+		     NULL, 0);
+	chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_cfg_multi_mcast(void *priv, u16 vsi, u16 enable)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	int ret = 0;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	if (enable)
+		ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_multi_mcast,
+					    p, vsi);
+	else
+		NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_multi_mcast, p, vsi);
+	return ret;
+}
+
+static int nbl_disp_chan_cfg_multi_mcast_req(void *priv, u16 vsi_id, u16 enable)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_send_info chan_send;
+	struct nbl_common_info *common;
+	struct nbl_chan_param_cfg_multi_mcast mcast;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	mcast.vsi = vsi_id;
+	mcast.enable = enable;
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf,
+		      NBL_CHAN_MSG_CFG_MULTI_MCAST_RULE, &mcast, sizeof(mcast),
+		      NULL, 0, 1);
+	return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				  &chan_send);
+}
+
+static void nbl_disp_chan_cfg_multi_mcast_resp(void *priv, u16 src_id,
+					       u16 msg_id, void *data,
+					       u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_param_cfg_multi_mcast *mcast;
+	struct nbl_chan_ack_info chan_ack;
+	int err = NBL_CHAN_RESP_OK;
+	int ret = 0;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+	mcast = (struct nbl_chan_param_cfg_multi_mcast *)data;
+
+	if (mcast->enable)
+		ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_multi_mcast,
+					    p, mcast->vsi);
+	else
+		NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_multi_mcast, p,
+				  mcast->vsi);
+	if (ret)
+		err = NBL_CHAN_RESP_ERR;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CFG_MULTI_MCAST_RULE,
+		     msg_id, err, NULL, 0);
+	chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int nbl_disp_chan_setup_multi_group_req(void *priv)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+	struct nbl_chan_send_info chan_send;
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_MULTI_GROUP,
+		      NULL, 0, NULL, 0, 1);
+	return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				  &chan_send);
+}
+
+static void nbl_disp_chan_setup_multi_group_resp(void *priv, u16 src_id,
+						 u16 msg_id, void *data,
+						 u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_chan_ack_info chan_ack;
+	int err = NBL_CHAN_RESP_OK;
+	int ret;
+
+	ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_multi_group, p);
+	if (ret)
+		err = NBL_CHAN_RESP_ERR;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_MULTI_GROUP, msg_id,
+		     err, NULL, 0);
+	chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static void nbl_disp_chan_remove_multi_group_req(void *priv)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+	struct nbl_chan_send_info chan_send;
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf,
+		      NBL_CHAN_MSG_REMOVE_MULTI_GROUP, NULL, 0, NULL, 0, 1);
+	chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_multi_group_resp(void *priv, u16 src_id,
+						  u16 msg_id, void *data,
+						  u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_chan_ack_info chan_ack;
+	int err = NBL_CHAN_RESP_OK;
+
+	NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_multi_group, p);
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_MULTI_GROUP, msg_id,
+		     err, NULL, 0);
+	chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack);
+}
+
+static int
+nbl_disp_chan_register_net_req(void *priv,
+			       struct nbl_register_net_param *register_param,
+			       struct nbl_register_net_result *register_result)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_param_register_net_info param = {0};
+	struct nbl_chan_send_info chan_send;
+	struct nbl_common_info *common;
+	int ret;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	param.pf_bar_start = register_param->pf_bar_start;
+	param.pf_bdf = register_param->pf_bdf;
+	param.vf_bar_start = register_param->vf_bar_start;
+	param.vf_bar_size = register_param->vf_bar_size;
+	param.total_vfs = register_param->total_vfs;
+	param.offset = register_param->offset;
+	param.stride = register_param->stride;
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REGISTER_NET,
+		      &param, sizeof(param), (void *)register_result,
+		      sizeof(*register_result), 1);
+
+	ret = chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				 &chan_send);
+	return ret;
+}
+
+static void nbl_disp_chan_register_net_resp(void *priv, u16 src_id, u16 msg_id,
+					    void *data, u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_param_register_net_info param;
+	struct nbl_register_net_result result = { 0 };
+	struct nbl_register_net_param register_param = { 0 };
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_chan_ack_info chan_ack;
+	int copy_len;
+	int err = NBL_CHAN_RESP_OK;
+	int ret = 0;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+	memset(&param, 0, sizeof(struct nbl_chan_param_register_net_info));
+	copy_len = data_len < sizeof(struct nbl_chan_param_register_net_info) ?
+		   data_len :
+		   sizeof(struct nbl_chan_param_register_net_info);
+	memcpy(&param, data, copy_len);
+
+	register_param.pf_bar_start = param.pf_bar_start;
+	register_param.pf_bdf = param.pf_bdf;
+	register_param.vf_bar_start = param.vf_bar_start;
+	register_param.vf_bar_size = param.vf_bar_size;
+	register_param.total_vfs = param.total_vfs;
+	register_param.offset = param.offset;
+	register_param.stride = param.stride;
+	register_param.is_vdpa = param.is_vdpa;
+
+	ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_net, p, src_id,
+				    &register_param, &result);
+	if (ret)
+		err = NBL_CHAN_RESP_ERR;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REGISTER_NET, msg_id, err,
+		     &result, sizeof(result));
+	ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				 &chan_ack);
+	if (ret)
+		dev_err(dev,
+			"channel send ack failed with ret: %d, msg_type: %d, src_id:%d\n",
+			ret, NBL_CHAN_MSG_REGISTER_NET, src_id);
+}
+
+static int nbl_disp_unregister_net(void *priv)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+
+	return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->unregister_net, p, 0);
+}
+
+static int nbl_disp_chan_unregister_net_req(void *priv)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_send_info chan_send;
+	struct nbl_common_info *common;
+
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+	common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+
+	NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_UNREGISTER_NET,
+		      NULL, 0, NULL, 0, 1);
+
+	return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				  &chan_send);
+}
+
+static void nbl_disp_chan_unregister_net_resp(void *priv, u16 src_id,
+					      u16 msg_id, void *data,
+					      u32 data_len)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common);
+	void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt);
+	struct nbl_resource_ops *res_ops;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_ack_info chan_ack;
+	int err = NBL_CHAN_RESP_OK;
+	int ret = 0;
+
+	res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+	chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+	ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->unregister_net, p,
+				    src_id);
+	if (ret)
+		err = NBL_CHAN_RESP_ERR;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_UNREGISTER_NET, msg_id, err,
+		     NULL, 0);
+	ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+				 &chan_ack);
+	if (ret)
+		dev_err(dev,
+			"channel send ack failed with ret: %d, msg_type: %d, src_id:%d\n",
+			ret, NBL_CHAN_MSG_UNREGISTER_NET, src_id);
+}
+
+static int nbl_disp_chan_alloc_txrx_queues_req(void *priv, u16 vsi_id,
+					       u16 queue_num)
+{
+	struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+	struct nbl_channel_ops *chan_ops;
+	struct nbl_chan_param_alloc_txrx_queues param = { 0 };
+	struct nbl_chan_param_alloc_txrx_queues result = { 
0 }; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vsi_id = vsi_id; + param.queue_num = queue_num; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, + &param, sizeof(param), &result, sizeof(result), 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_alloc_txrx_queues_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_chan_param_alloc_txrx_queues *param; + struct nbl_chan_param_alloc_txrx_queues result = {0}; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + param = (struct nbl_chan_param_alloc_txrx_queues *)data; + result.queue_num = param->queue_num; + + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->alloc_txrx_queues, p, + param->vsi_id, param->queue_num); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, msg_id, + err, &result, sizeof(result)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_free_txrx_queues_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_FREE_TXRX_QUEUES, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void 
nbl_disp_chan_free_txrx_queues_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + vsi_id = *(u16 *)data; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->free_txrx_queues, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_FREE_TXRX_QUEUES, msg_id, + err, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_register_vsi2q_req(void *priv, u16 vsi_index, + u16 vsi_id, u16 queue_offset, + u16 queue_num) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_param_register_vsi2q param = {0}; + struct nbl_chan_send_info chan_send; + + param.vsi_index = vsi_index; + param.vsi_id = vsi_id; + param.queue_offset = queue_offset; + param.queue_num = queue_num; + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REGISTER_VSI2Q, + &param, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_register_vsi2q_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_param_register_vsi2q *param = NULL; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + + param = 
(struct nbl_chan_param_register_vsi2q *)data; + + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_vsi2q, p, + param->vsi_index, param->vsi_id, + param->queue_offset, param->queue_num); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REGISTER_VSI2Q, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_setup_q2vsi_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_Q2VSI, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_setup_q2vsi_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + vsi_id = *(u16 *)data; + + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_q2vsi, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_Q2VSI, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_remove_q2vsi_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_Q2VSI, + &vsi_id, 
sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_remove_q2vsi_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + vsi_id = *(u16 *)data; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_q2vsi, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_Q2VSI, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_setup_rss_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_RSS, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_setup_rss_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_ack_info chan_ack; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + vsi_id = *(u16 *)data; + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_rss, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_RSS, msg_id, err, + NULL, 0); + 
chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_remove_rss_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_RSS, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_remove_rss_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + vsi_id = *(u16 *)data; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_rss, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_RSS, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_setup_queue_req(void *priv, + struct nbl_txrx_queue_param *_param, + bool is_tx) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_setup_queue param; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + memcpy(&param.queue_param, _param, sizeof(param.queue_param)); + param.is_tx = is_tx; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_QUEUE, + &param, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static 
void nbl_disp_chan_setup_queue_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_setup_queue *param; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + param = (struct nbl_chan_param_setup_queue *)data; + + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_queue, p, + &param->queue_param, param->is_tx); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_QUEUE, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_remove_all_queues_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_ALL_QUEUES, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_remove_all_queues_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + vsi_id = *(u16 *)data; + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_all_queues, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, 
NBL_CHAN_MSG_REMOVE_ALL_QUEUES, msg_id, + err, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_cfg_dsch_req(void *priv, u16 vsi_id, bool vld) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_param_cfg_dsch param = { 0 }; + struct nbl_chan_send_info chan_send; + + param.vsi_id = vsi_id; + param.vld = vld; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_CFG_DSCH, &param, + sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_cfg_dsch_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_cfg_dsch *param; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + param = (struct nbl_chan_param_cfg_dsch *)data; + + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->cfg_dsch, p, + param->vsi_id, param->vld); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CFG_DSCH, msg_id, err, NULL, + 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_setup_cqs(void *priv, u16 vsi_id, u16 real_qps, + bool rss_indir_set) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_cqs, p, vsi_id, + real_qps, rss_indir_set); + return ret; +} + +static int 
nbl_disp_chan_setup_cqs_req(void *priv, u16 vsi_id, u16 real_qps, + bool rss_indir_set) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_setup_cqs param = { 0 }; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vsi_id = vsi_id; + param.real_qps = real_qps; + param.rss_indir_set = rss_indir_set; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SETUP_CQS, &param, + sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_setup_cqs_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_setup_cqs param; + struct nbl_chan_ack_info chan_ack; + int copy_len; + int err = NBL_CHAN_RESP_OK; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + memset(&param, 0, sizeof(struct nbl_chan_param_setup_cqs)); + param.rss_indir_set = true; + copy_len = data_len < sizeof(struct nbl_chan_param_setup_cqs) ? 
+ data_len : + sizeof(struct nbl_chan_param_setup_cqs); + memcpy(&param, data, copy_len); + + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_cqs, p, + param.vsi_id, param.real_qps, + param.rss_indir_set); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SETUP_CQS, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_remove_cqs_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_REMOVE_CQS, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_remove_cqs_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 vsi_id; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + vsi_id = *(u16 *)data; + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_cqs, p, vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REMOVE_CQS, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_set_promisc_mode(void *priv, u16 vsi_id, u16 mode) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, 
res_ops->set_promisc_mode, p, + vsi_id, mode); + return ret; +} + +static int nbl_disp_chan_set_promisc_mode_req(void *priv, u16 vsi_id, u16 mode) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_param_set_promisc_mode param = {0}; + struct nbl_chan_send_info chan_send = {0}; + + param.vsi_id = vsi_id; + param.mode = mode; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SET_PROSISC_MODE, + &param, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_set_promisc_mode_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_set_promisc_mode *param = NULL; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + + param = (struct nbl_chan_param_set_promisc_mode *)data; + err = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->set_promisc_mode, p, + param->vsi_id, param->mode); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_PROSISC_MODE, msg_id, + err, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_get_rxfh_indir_size_req(void *priv, u16 vsi_id, + u32 *rxfh_indir_size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send = {0}; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, + 
NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE, &vsi_id, sizeof(vsi_id), + rxfh_indir_size, sizeof(u32), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_get_rxfh_indir_size_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + u32 rxfh_indir_size = 0; + int ret = NBL_CHAN_RESP_OK; + u16 vsi_id; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + vsi_id = *(u16 *)data; + NBL_OPS_CALL(res_ops->get_rxfh_indir_size, + (p, vsi_id, &rxfh_indir_size)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE, msg_id, + ret, &rxfh_indir_size, sizeof(u32)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_set_sfp_state_req(void *priv, u8 eth_id, u8 state) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_chan_param_set_sfp_state param = {0}; + struct nbl_chan_send_info chan_send = {0}; + struct nbl_channel_ops *chan_ops; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.eth_id = eth_id; + param.state = state; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SET_SFP_STATE, + &param, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_set_sfp_state_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_set_sfp_state *param; + struct 
nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_set_sfp_state *)data; + + ret = NBL_OPS_CALL_RET(res_ops->set_sfp_state, + (p, param->eth_id, param->state)); + if (ret) { + err = NBL_CHAN_RESP_ERR; + dev_err(dev, "set sfp state failed with ret: %d\n", ret); + } + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_SFP_STATE, msg_id, err, + NULL, 0); + + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d, src_id: %d\n", + ret, NBL_CHAN_MSG_SET_SFP_STATE, src_id); +} + +static u16 nbl_disp_chan_get_function_id_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = {0}; + struct nbl_common_info *common; + u16 func_id = 0; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_FUNCTION_ID, + &vsi_id, sizeof(vsi_id), &func_id, sizeof(func_id), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return func_id; +} + +static void nbl_disp_chan_get_function_id_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + u16 vsi_id, func_id; + + vsi_id = *(u16 *)data; + + func_id = NBL_OPS_CALL_RET(res_ops->get_function_id, (p, vsi_id)); + NBL_CHAN_ACK(chan_ack, src_id, 
NBL_CHAN_MSG_GET_FUNCTION_ID, msg_id, + ret, &func_id, sizeof(func_id)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_get_real_bdf_req(void *priv, u16 vsi_id, u8 *bus, + u8 *dev, u8 *function) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_result_get_real_bdf result = { 0 }; + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_common_info *common; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_REAL_BDF, + &vsi_id, sizeof(vsi_id), &result, sizeof(result), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + *bus = result.bus; + *dev = result.dev; + *function = result.function; +} + +static void nbl_disp_chan_get_real_bdf_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_result_get_real_bdf result = { 0 }; + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + u16 vsi_id; + + vsi_id = *(u16 *)data; + NBL_OPS_CALL(res_ops->get_real_bdf, + (p, vsi_id, &result.bus, &result.dev, &result.function)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_REAL_BDF, msg_id, ret, + &result, sizeof(result)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_chan_get_mbx_irq_num_req(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_common_info *common; + int result = 0; + + common = 
NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_MBX_IRQ_NUM, + NULL, 0, &result, sizeof(result), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return result; +} + +static void nbl_disp_chan_get_mbx_irq_num_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int result, ret = NBL_CHAN_RESP_OK; + + result = NBL_OPS_CALL_RET(res_ops->get_mbx_irq_num, (p)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_MBX_IRQ_NUM, msg_id, + ret, &result, sizeof(result)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_clear_flow_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_common_info *common; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_CLEAR_FLOW, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_clear_flow_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + u16 *vsi_id = (u16 *)data; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->clear_flow, p, *vsi_id); + + 
NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CLEAR_FLOW, msg_id, + NBL_CHAN_RESP_OK, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_clear_queues_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_common_info *common; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_CLEAR_QUEUE, + &vsi_id, sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_clear_queues_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + u16 *vsi_id = (u16 *)data; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->clear_queues, p, *vsi_id); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CLEAR_QUEUE, msg_id, + NBL_CHAN_RESP_OK, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static u16 nbl_disp_chan_get_vsi_id_req(void *priv, u16 func_id, u16 type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_param_get_vsi_id param = {0}; + struct nbl_chan_param_get_vsi_id result = {0}; + struct nbl_chan_send_info chan_send; + + param.type = type; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_VSI_ID, + &param, sizeof(param), &result, sizeof(result), 1); + 
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return result.vsi_id; +} + +static void nbl_disp_chan_get_vsi_id_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_get_vsi_id *param; + struct nbl_chan_param_get_vsi_id result; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret = 0; + + param = (struct nbl_chan_param_get_vsi_id *)data; + + result.vsi_id = + NBL_OPS_CALL_RET(res_ops->get_vsi_id, (p, src_id, param->type)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_VSI_ID, msg_id, err, + &result, sizeof(result)); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_GET_VSI_ID); +} + +static void nbl_disp_chan_get_eth_id_req(void *priv, u16 vsi_id, u8 *eth_mode, + u8 *eth_id, u8 *logic_eth_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_param_get_eth_id param = { 0 }; + struct nbl_chan_param_get_eth_id result = { 0 }; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vsi_id = vsi_id; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_ETH_ID, + &param, sizeof(param), &result, sizeof(result), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + *eth_mode = result.eth_mode; + *eth_id = result.eth_id; + *logic_eth_id = result.logic_eth_id; +} + +static void nbl_disp_chan_get_eth_id_resp(void 
*priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_get_eth_id *param; + struct nbl_chan_param_get_eth_id result = { 0 }; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret = 0; + + param = (struct nbl_chan_param_get_eth_id *)data; + + NBL_OPS_CALL(res_ops->get_eth_id, + (p, param->vsi_id, &result.eth_mode, &result.eth_id, + &result.logic_eth_id)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_ETH_ID, msg_id, err, + &result, sizeof(result)); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_GET_ETH_ID); +} + +static int nbl_disp_alloc_rings(void *priv, struct net_device *netdev, + struct nbl_ring_param *ring_param) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->alloc_rings, (p, netdev, ring_param)); + return ret; +} + +static void nbl_disp_remove_rings(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + + if (!disp_mgt) + return; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->remove_rings, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static dma_addr_t nbl_disp_start_tx_ring(void *priv, u8 ring_index) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops 
*res_ops; + dma_addr_t addr = 0; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + addr = NBL_OPS_CALL_RET(res_ops->start_tx_ring, (p, ring_index)); + return addr; +} + +static void nbl_disp_stop_tx_ring(void *priv, u8 ring_index) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + if (!disp_mgt) + return; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->stop_tx_ring, (p, ring_index)); +} + +static dma_addr_t nbl_disp_start_rx_ring(void *priv, u8 ring_index, + bool use_napi) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + dma_addr_t addr = 0; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + addr = NBL_OPS_CALL_RET(res_ops->start_rx_ring, + (p, ring_index, use_napi)); + + return addr; +} + +static void nbl_disp_stop_rx_ring(void *priv, u8 ring_index) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + if (!disp_mgt) + return; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->stop_rx_ring, (p, ring_index)); +} + +static void nbl_disp_kick_rx_ring(void *priv, u16 index) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->kick_rx_ring, (p, index)); +} + +static struct nbl_napi_struct *nbl_disp_get_vector_napi(void *priv, u16 index) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + return 
NBL_OPS_CALL_RET_PTR(res_ops->get_vector_napi, (p, index)); +} + +static void nbl_disp_set_vector_info(void *priv, u8 __iomem *irq_enable_base, + u32 irq_data, u16 index, bool mask_en) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->set_vector_info, + (p, irq_enable_base, irq_data, index, mask_en)); +} + +static void nbl_disp_register_vsi_ring(void *priv, u16 vsi_index, + u16 ring_offset, u16 ring_num) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL(res_ops->register_vsi_ring, + (p, vsi_index, ring_offset, ring_num)); +} + +static void nbl_disp_get_res_pt_ops(void *priv, + struct nbl_resource_pt_ops *pt_ops) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->get_resource_pt_ops, (p, pt_ops)); +} + +static int +nbl_disp_register_net(void *priv, struct nbl_register_net_param *register_param, + struct nbl_register_net_result *register_result) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_net, p, 0, + register_param, register_result); + return ret; +} + +static int nbl_disp_alloc_txrx_queues(void *priv, u16 vsi_id, u16 queue_num) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = 
NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->alloc_txrx_queues, p, + vsi_id, queue_num); + return ret; +} + +static void nbl_disp_free_txrx_queues(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->free_txrx_queues, p, vsi_id); +} + +static int nbl_disp_register_vsi2q(void *priv, u16 vsi_index, u16 vsi_id, + u16 queue_offset, u16 queue_num) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->register_vsi2q, p, + vsi_index, vsi_id, queue_offset, + queue_num); +} + +static int nbl_disp_setup_q2vsi(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_q2vsi, p, vsi_id); +} + +static void nbl_disp_remove_q2vsi(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_q2vsi, p, vsi_id); +} + +static int nbl_disp_setup_rss(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_rss, p, vsi_id); +} + +static void nbl_disp_remove_rss(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt 
*disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_rss, p, vsi_id); +} + +static int nbl_disp_setup_queue(void *priv, struct nbl_txrx_queue_param *param, + bool is_tx) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_queue, p, param, + is_tx); + return ret; +} + +static void nbl_disp_remove_all_queues(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_all_queues, p, vsi_id); +} + +static int nbl_disp_cfg_dsch(void *priv, u16 vsi_id, bool vld) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->cfg_dsch, p, vsi_id, + vld); + return ret; +} + +static void nbl_disp_remove_cqs(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_cqs, p, vsi_id); +} + +static u8 __iomem * +nbl_disp_get_msix_irq_enable_info(void *priv, u16 global_vec_id, u32 *irq_data) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + if (!disp_mgt) + 
return NULL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + return NBL_OPS_CALL_RET_PTR(res_ops->get_msix_irq_enable_info, + (p, global_vec_id, irq_data)); +} + +static int nbl_disp_add_macvlan(void *priv, u8 *mac, u16 vlan, u16 vsi) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + if (!disp_mgt || !mac) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_macvlan, p, mac, + vlan, vsi); + return ret; +} + +static void nbl_disp_del_macvlan(void *priv, u8 *mac, u16 vlan, u16 vsi) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + if (!disp_mgt || !mac) + return; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_macvlan, p, mac, vlan, vsi); +} + +static int nbl_disp_add_multi_rule(void *priv, u16 vsi) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + u8 broadcast_mac[ETH_ALEN]; + int ret; + + memset(broadcast_mac, 0xFF, ETH_ALEN); + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_macvlan, p, + broadcast_mac, 0, vsi); + + return ret; +} + +static void nbl_disp_del_multi_rule(void *priv, u16 vsi) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + u8 broadcast_mac[ETH_ALEN]; + + memset(broadcast_mac, 0xFF, ETH_ALEN); + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_macvlan, p, broadcast_mac, 0, + vsi); +} + +static int nbl_disp_setup_multi_group(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt 
*)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->setup_multi_group, + NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)); +} + +static void nbl_disp_remove_multi_group(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->remove_multi_group, + NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)); +} + +static void nbl_disp_get_net_stats(void *priv, struct nbl_stats *net_stats) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->get_net_stats, (p, net_stats)); +} + +static void nbl_disp_cfg_txrx_vlan(void *priv, u16 vlan_tci, u16 vlan_proto, + u8 vsi_index) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL(res_ops->cfg_txrx_vlan, + (p, vlan_tci, vlan_proto, vsi_index)); +} + +static void nbl_disp_get_rxfh_indir_size(void *priv, u16 vsi_id, + u32 *rxfh_indir_size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->get_rxfh_indir_size, + (p, vsi_id, rxfh_indir_size)); +} + +static int nbl_disp_set_rxfh_indir(void *priv, u16 vsi_id, const u32 *indir, + u32 indir_size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->set_rxfh_indir, + (p, vsi_id, indir, 
indir_size)); + return ret; +} + +static int nbl_disp_chan_set_rxfh_indir_req(void *priv, u16 vsi_id, + const u32 *indir, u32 indir_size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_chan_param_set_rxfh_indir *param = NULL; + struct nbl_chan_send_info chan_send = {0}; + struct nbl_channel_ops *chan_ops; + struct nbl_common_info *common; + int ret = 0; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param = kzalloc(sizeof(*param), GFP_KERNEL); + if (!param) + return -ENOMEM; + + param->vsi_id = vsi_id; + param->indir_size = indir_size; + memcpy(param->indir, indir, indir_size * sizeof(param->indir[0])); + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_SET_RXFH_INDIR, param, sizeof(*param), NULL, + 0, 1); + ret = chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); + kfree(param); + return ret; +} + +static void nbl_disp_chan_set_rxfh_indir_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_set_rxfh_indir *param; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_set_rxfh_indir *)data; + + err = NBL_OPS_CALL_RET(res_ops->set_rxfh_indir, + (p, param->vsi_id, param->indir, + param->indir_size)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_RXFH_INDIR, msg_id, err, + NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_set_sfp_state(void *priv, u8 eth_id, u8 state) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct 
nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->set_sfp_state, (p, eth_id, state)); + return ret; +} + +static void nbl_disp_deinit_chip_module(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->deinit_chip_module, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static int nbl_disp_init_chip_module(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->init_chip_module, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + return ret; +} + +static int nbl_disp_queue_init(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->queue_init, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + return ret; +} + +static int nbl_disp_vsi_init(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->vsi_init, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + return ret; +} + +static int nbl_disp_init_vf_msix_map(void *priv, u16 func_id, bool enable) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->init_vf_msix_map, p, + func_id, enable); + return ret; +} + +static int 
nbl_disp_chan_init_vf_msix_map_req(void *priv, u16 func_id, + bool enable) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_init_vf_msix_map param = {0}; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.func_id = func_id; + param.enable = enable; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_INIT_VF_MSIX_MAP, + ¶m, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_init_vf_msix_map_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_init_vf_msix_map *param; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_init_vf_msix_map *)data; + + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->init_vf_msix_map, p, + param->func_id, param->enable); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_INIT_VF_MSIX_MAP, msg_id, + err, NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_INIT_VF_MSIX_MAP); +} + +static int nbl_disp_configure_msix_map(void *priv, u16 num_net_msix, + u16 num_others_msix, + bool net_msix_mask_en) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct 
nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->configure_msix_map, p, 0, + num_net_msix, num_others_msix, + net_msix_mask_en); + return ret; +} + +static int nbl_disp_chan_configure_msix_map_req(void *priv, u16 num_net_msix, + u16 num_others_msix, + bool net_msix_mask_en) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_chan_param_cfg_msix_map param = {0}; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + if (!disp_mgt) + return -EINVAL; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.num_net_msix = num_net_msix; + param.num_others_msix = num_others_msix; + param.msix_mask_en = net_msix_mask_en; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, + NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, ¶m, sizeof(param), + NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_configure_msix_map_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_cfg_msix_map *param; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_cfg_msix_map *)data; + + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->configure_msix_map, p, + src_id, param->num_net_msix, + param->num_others_msix, + param->msix_mask_en); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, 
msg_id, + err, NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_CONFIGURE_MSIX_MAP); +} + +static int nbl_disp_chan_destroy_msix_map_req(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + if (!disp_mgt) + return -EINVAL; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_DESTROY_MSIX_MAP, + NULL, 0, NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_destroy_msix_map_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->destroy_msix_map, p, + src_id); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_DESTROY_MSIX_MAP, msg_id, + err, NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_DESTROY_MSIX_MAP); +} + +static int nbl_disp_chan_enable_mailbox_irq_req(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct 
nbl_chan_param_enable_mailbox_irq param = { 0 }; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + if (!disp_mgt) + return -EINVAL; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vector_id = vector_id; + param.enable_msix = enable_msix; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, + NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, ¶m, sizeof(param), + NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_enable_mailbox_irq_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_enable_mailbox_irq *param; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_enable_mailbox_irq *)data; + + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->enable_mailbox_irq, p, + src_id, param->vector_id, + param->enable_msix); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, msg_id, + err, NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ); +} + +static u16 nbl_disp_chan_get_global_vector_req(void *priv, u16 vsi_id, + u16 local_vec_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_get_global_vector param = { 0 }; + struct nbl_chan_param_get_global_vector result = { 0 }; + struct nbl_chan_send_info chan_send; 
+ struct nbl_common_info *common; + + if (!disp_mgt) + return -EINVAL; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vsi_id = vsi_id; + param.vector_id = local_vec_id; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_GLOBAL_VECTOR, + ¶m, sizeof(param), &result, sizeof(result), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return result.vector_id; +} + +static void nbl_disp_chan_get_global_vector_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_get_global_vector *param; + struct nbl_chan_param_get_global_vector result; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_get_global_vector *)data; + + result.vector_id = + NBL_OPS_CALL_RET(res_ops->get_global_vector, + (p, param->vsi_id, param->vector_id)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_GLOBAL_VECTOR, msg_id, + err, &result, sizeof(result)); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_GET_GLOBAL_VECTOR); +} + +static int nbl_disp_destroy_msix_map(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->destroy_msix_map, p, 0); + return ret; +} + +static 
int nbl_disp_enable_mailbox_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->enable_mailbox_irq, p, 0, + vector_id, enable_msix); + return ret; +} + +static int nbl_disp_enable_abnormal_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->enable_abnormal_irq, + (p, vector_id, enable_msix)); + return ret; +} + +static int nbl_disp_enable_adminq_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->enable_adminq_irq, + (p, vector_id, enable_msix)); + return ret; +} + +static u16 nbl_disp_get_global_vector(void *priv, u16 vsi_id, u16 local_vec_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + u16 ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->get_global_vector, + (p, vsi_id, local_vec_id)); + return ret; +} + +static u16 nbl_disp_get_msix_entry_id(void *priv, u16 vsi_id, u16 local_vec_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + u16 ret; 
+ + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->get_msix_entry_id, + (p, vsi_id, local_vec_id)); + return ret; +} + +static u16 nbl_disp_get_vsi_id(void *priv, u16 func_id, u16 type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + if (!disp_mgt) + return -EINVAL; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + return NBL_OPS_CALL_RET(res_ops->get_vsi_id, (p, func_id, type)); +} + +static void nbl_disp_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode, + u8 *eth_id, u8 *logic_eth_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL(res_ops->get_eth_id, + (p, vsi_id, eth_mode, eth_id, logic_eth_id)); +} + +static int nbl_disp_chan_add_lldp_flow_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send; + + NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_LLDP_FLOW, &vsi_id, + sizeof(vsi_id), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_add_lldp_flow_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_lldp_flow, p, + *(u16 *)data); + if (ret) + err = 
NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_ADD_LLDP_FLOW, msg_id, err, + NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_ADD_LLDP_FLOW); +} + +static int nbl_disp_add_lldp_flow(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->add_lldp_flow, p, + vsi_id); +} + +static void nbl_disp_chan_del_lldp_flow_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send; + + NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_LLDP_FLOW, &vsi_id, + sizeof(vsi_id), NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_del_lldp_flow_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_lldp_flow, p, *(u16 *)data); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_DEL_LLDP_FLOW, msg_id, err, + NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_DEL_LLDP_FLOW); +} + +static void 
nbl_disp_del_lldp_flow(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->del_lldp_flow, p, vsi_id); +} + +static u32 nbl_disp_get_tx_headroom(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + u32 ret; + + ret = NBL_OPS_CALL_RET(res_ops->get_tx_headroom, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + return ret; +} + +static u8 __iomem *nbl_disp_get_hw_addr(void *priv, size_t *size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + u8 __iomem *addr = NULL; + + addr = NBL_OPS_CALL_RET_PTR(res_ops->get_hw_addr, (p, size)); + return addr; +} + +static u16 nbl_disp_get_function_id(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + u16 ret; + + ret = NBL_OPS_CALL_RET(res_ops->get_function_id, (p, vsi_id)); + return ret; +} + +static void nbl_disp_get_real_bdf(void *priv, u16 vsi_id, u8 *bus, u8 *dev, + u8 *function) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL(res_ops->get_real_bdf, (p, vsi_id, bus, dev, function)); +} + +static bool nbl_disp_check_fw_heartbeat(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + 
ret = NBL_OPS_CALL_RET(res_ops->check_fw_heartbeat, (p)); + return ret; +} + +static bool nbl_disp_check_fw_reset(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + return NBL_OPS_CALL_RET(res_ops->check_fw_reset, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static bool nbl_disp_get_product_fix_cap(void *priv, + enum nbl_fix_cap_type cap_type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + bool has_cap = false; + + has_cap = NBL_OPS_CALL_RET(res_ops->get_product_fix_cap, (p, cap_type)); + return has_cap; +} + +static int nbl_disp_get_mbx_irq_num(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_mbx_irq_num, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static int nbl_disp_get_adminq_irq_num(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_adminq_irq_num, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static int nbl_disp_get_abnormal_irq_num(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_abnormal_irq_num, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static void nbl_disp_clear_flow(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->clear_flow, p, 
vsi_id); +} + +static void nbl_disp_clear_queues(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->clear_queues, p, vsi_id); +} + +static u16 nbl_disp_get_vsi_global_qid(void *priv, u16 vsi_id, u16 local_qid) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_vsi_global_queue_id, + (p, vsi_id, local_qid)); +} + +static u16 nbl_disp_chan_get_vsi_global_qid_req(void *priv, u16 vsi_id, + u16 local_qid) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_vsi_qid_info param = { 0 }; + struct nbl_chan_send_info chan_send; + + param.vsi_id = vsi_id; + param.local_qid = local_qid; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID, ¶m, + sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_get_vsi_global_qid_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_vsi_qid_info *param; + struct nbl_chan_ack_info chan_ack; + u16 global_qid; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + param = (struct nbl_chan_vsi_qid_info *)data; + global_qid = NBL_OPS_CALL_RET(res_ops->get_vsi_global_queue_id, + (p, 
param->vsi_id, param->local_qid)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID, + msg_id, global_qid, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_get_port_attributes(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + int ret; + + ret = NBL_OPS_CALL_RET(res_ops->get_port_attributes, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + if (ret) + dev_err(dev, "get port attributes failed with ret: %d\n", ret); + + return ret; +} + +static int nbl_disp_update_ring_num(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->update_ring_num, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static int nbl_disp_set_ring_num(void *priv, + struct nbl_cmd_net_ring_num *param) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->set_ring_num, (p, param)); +} + +static int nbl_disp_get_part_number(void *priv, char *part_number) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_part_number, (p, part_number)); +} + +static int nbl_disp_get_serial_number(void *priv, char *serial_number) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_serial_number, (p, 
serial_number)); +} + +static int nbl_disp_enable_port(void *priv, bool enable) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + int ret; + + ret = NBL_OPS_CALL_RET(res_ops->enable_port, (p, enable)); + if (ret) + dev_err(dev, "enable port failed with ret: %d\n", ret); + + return ret; +} + +static void nbl_disp_chan_recv_port_notify_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + NBL_OPS_CALL(res_ops->recv_port_notify, (p, data)); +} + +static int nbl_disp_get_link_state(void *priv, u8 eth_id, + struct nbl_eth_link_info *eth_link_info) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret = 0; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + /* if donot have res_ops->get_link_state(), default eth is up */ + if (res_ops->get_link_state) + ret = res_ops->get_link_state(p, eth_id, eth_link_info); + else + eth_link_info->link_status = 1; + + return ret; +} + +static int +nbl_disp_chan_get_link_state_req(void *priv, u8 eth_id, + struct nbl_eth_link_info *eth_link_info) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_GET_LINK_STATE, + ð_id, sizeof(eth_id), eth_link_info, + sizeof(*eth_link_info), 1); + return 
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_get_link_state_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u8 eth_id; + struct nbl_eth_link_info eth_link_info = { 0 }; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + eth_id = *(u8 *)data; + ret = res_ops->get_link_state(p, eth_id, ð_link_info); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_LINK_STATE, msg_id, err, + ð_link_info, sizeof(eth_link_info)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_set_wol(void *priv, u8 eth_id, bool enable) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->set_wol, (p, eth_id, enable)); +} + +static int nbl_disp_chan_set_wol_req(void *priv, u8 eth_id, bool enable) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_send_info chan_send; + struct nbl_chan_param_set_wol param = { 0 }; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.eth_id = eth_id; + param.enable = enable; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_SET_WOL, ¶m, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_set_wol_resp(void *priv, u16 src_id, u16 msg_id, + 
void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_ack_info chan_ack; + struct nbl_chan_param_set_wol *param; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + + param = (struct nbl_chan_param_set_wol *)data; + ret = res_ops->set_wol(p, param->eth_id, param->enable); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_WOL, msg_id, err, NULL, + 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_set_eth_mac_addr(void *priv, u8 *mac, u8 eth_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->set_eth_mac_addr, (p, mac, eth_id)); +} + +static int nbl_disp_chan_set_eth_mac_addr_req(void *priv, u8 *mac, u8 eth_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_set_eth_mac_addr param; + struct nbl_chan_send_info chan_send; + struct nbl_common_info *common; + + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + memcpy(param.mac, mac, sizeof(param.mac)); + param.eth_id = eth_id; + + NBL_CHAN_SEND(chan_send, common->mgt_pf, NBL_CHAN_MSG_SET_ETH_MAC_ADDR, + ¶m, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_set_eth_mac_addr_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct device *dev = 
NBL_COMMON_TO_DEV(disp_mgt->common); + struct nbl_resource_ops *res_ops; + struct nbl_channel_ops *chan_ops; + struct nbl_chan_param_set_eth_mac_addr *param; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + param = (struct nbl_chan_param_set_eth_mac_addr *)data; + + ret = NBL_OPS_CALL_RET(res_ops->set_eth_mac_addr, + (p, param->mac, param->eth_id)); + if (ret) + err = NBL_CHAN_RESP_ERR; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_SET_ETH_MAC_ADDR, msg_id, + err, NULL, 0); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_MSG_SET_ETH_MAC_ADDR); +} + +static int +nbl_disp_process_abnormal_event(void *priv, + struct nbl_abnormal_event_info *abnomal_info) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return res_ops->process_abnormal_event(p, abnomal_info); +} + +static void nbl_disp_adapt_desc_gother(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + NBL_OPS_CALL(res_ops->adapt_desc_gother, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static void nbl_disp_flr_clear_net(void *priv, u16 vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->flr_clear_net, p, vf_id); +} + +static void nbl_disp_flr_clear_queues(void *priv, u16 vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct 
nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->flr_clear_queues, p, vf_id); +} + +static void nbl_disp_flr_clear_flows(void *priv, u16 vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->flr_clear_flows, p, vf_id); +} + +static void nbl_disp_flr_clear_interrupt(void *priv, u16 vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->flr_clear_interrupt, p, vf_id); +} + +static u16 nbl_disp_covert_vfid_to_vsi_id(void *priv, u16 vfid) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->covert_vfid_to_vsi_id, + p, vfid); +} + +static void nbl_disp_unmask_all_interrupts(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->unmask_all_interrupts, + NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)); +} + +static void nbl_disp_keep_alive_req(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_KEEP_ALIVE, NULL, 0, NULL, 0, 1); + + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), 
&chan_send); +} + +static void nbl_disp_chan_keep_alive_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_ack_info chan_ack; + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_KEEP_ALIVE, msg_id, 0, NULL, + 0); + + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_chan_get_rep_queue_info_req(void *priv, u16 *queue_num, + u16 *queue_size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_get_queue_info result = { 0 }; + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + NBL_CHAN_SEND(chan_send, common->mgt_pf, + NBL_CHAN_MSG_GET_REP_QUEUE_INFO, NULL, 0, &result, + sizeof(result), 1); + + if (!chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send)) { + *queue_num = result.queue_num; + *queue_size = result.queue_size; + } +} + +static void nbl_disp_chan_get_rep_queue_info_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + struct nbl_chan_param_get_queue_info result = { 0 }; + int ret = NBL_CHAN_RESP_OK; + + NBL_OPS_CALL(res_ops->get_rep_queue_info, + (p, &result.queue_num, &result.queue_size)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_REP_QUEUE_INFO, msg_id, + ret, &result, sizeof(result)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_get_rep_queue_info(void *priv, 
u16 *queue_num, + u16 *queue_size) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL(res_ops->get_rep_queue_info, (p, queue_num, queue_size)); +} + +static int +nbl_disp_passthrough_fw_cmd(void *priv, + struct nbl_passthrough_fw_cmd *param, + struct nbl_passthrough_fw_cmd *result) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->passthrough_fw_cmd, + (p, param, result)); +} + +static int nbl_disp_chan_get_board_id_req(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + int result = -1; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_GET_BOARD_ID, NULL, 0, &result, + sizeof(result), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return result; +} + +static void nbl_disp_chan_get_board_id_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK, result = -1; + + result = NBL_OPS_CALL_RET(res_ops->get_board_id, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_BOARD_ID, msg_id, ret, + &result, sizeof(result)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int 
nbl_disp_get_board_id(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_board_id, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); +} + +static dma_addr_t nbl_disp_restore_abnormal_ring(void *priv, int ring_index, + int type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->restore_abnormal_ring, + (p, ring_index, type)); +} + +static int nbl_disp_restart_abnormal_ring(void *priv, int ring_index, int type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->restart_abnormal_ring, + (p, ring_index, type)); +} + +static int nbl_disp_chan_stop_abnormal_hw_queue_req(void *priv, u16 vsi_id, + u16 local_queue_id, + int type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_param_stop_abnormal_hw_queue param = { 0 }; + struct nbl_chan_send_info chan_send = { 0 }; + + param.vsi_id = vsi_id; + param.local_queue_id = local_queue_id; + param.type = type; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_STOP_ABNORMAL_HW_QUEUE, ¶m, + sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_stop_abnormal_hw_queue_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = 
NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_param_stop_abnormal_hw_queue *param = NULL; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + + param = (struct nbl_chan_param_stop_abnormal_hw_queue *)data; + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->stop_abnormal_hw_queue, p, + param->vsi_id, param->local_queue_id, param->type); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_STOP_ABNORMAL_HW_QUEUE, + msg_id, ret, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_stop_abnormal_hw_queue(void *priv, u16 vsi_id, + u16 local_queue_id, int type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->stop_abnormal_hw_queue, + p, vsi_id, local_queue_id, type); +} + +static int nbl_disp_stop_abnormal_sw_queue(void *priv, u16 local_queue_id, + int type) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->stop_abnormal_sw_queue, + p, local_queue_id, type); +} + +static u16 nbl_disp_get_local_queue_id(void *priv, u16 vsi_id, + u16 global_queue_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_local_queue_id, + (p, vsi_id, global_queue_id)); +} + +static u16 nbl_disp_get_vf_function_id(void *priv, u16 vsi_id, int vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops 
*res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_vf_function_id, + (p, vsi_id, vf_id)); +} + +static u16 nbl_disp_chan_get_vf_function_id_req(void *priv, u16 vsi_id, + int vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_get_vf_func_id param; + struct nbl_common_info *common; + u16 func_id = 0; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + param.vsi_id = vsi_id; + param.vf_id = vf_id; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_GET_VF_FUNCTION_ID, ¶m, sizeof(param), + &func_id, sizeof(func_id), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return func_id; +} + +static void nbl_disp_chan_get_vf_function_id_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_get_vf_func_id *param; + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + u16 func_id; + + param = (struct nbl_chan_param_get_vf_func_id *)data; + func_id = NBL_OPS_CALL_RET(res_ops->get_vf_function_id, + (p, param->vsi_id, param->vf_id)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_VF_FUNCTION_ID, msg_id, + ret, &func_id, sizeof(func_id)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static u16 nbl_disp_get_vf_vsi_id(void *priv, u16 vsi_id, int vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = 
NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_vf_vsi_id, (p, vsi_id, vf_id)); +} + +static u16 nbl_disp_chan_get_vf_vsi_id_req(void *priv, u16 vsi_id, int vf_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_get_vf_vsi_id param; + struct nbl_common_info *common; + u16 vf_vsi = 0; + + common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + param.vsi_id = vsi_id; + param.vf_id = vf_id; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_GET_VF_VSI_ID, ¶m, sizeof(param), + &vf_vsi, sizeof(vf_vsi), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return vf_vsi; +} + +static void nbl_disp_chan_get_vf_vsi_id_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_param_get_vf_vsi_id *param; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + u16 vsi_id; + + param = (struct nbl_chan_param_get_vf_vsi_id *)data; + vsi_id = NBL_OPS_CALL_RET(res_ops->get_vf_vsi_id, + (p, param->vsi_id, param->vf_id)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_VF_VSI_ID, msg_id, ret, + &vsi_id, sizeof(vsi_id)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static bool nbl_disp_check_vf_is_active(void *priv, u16 func_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->check_vf_is_active, (p, func_id)); + 
return ret; +} + +static bool nbl_disp_chan_check_vf_is_active_req(void *priv, u16 func_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + bool is_active; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_CHECK_VF_IS_ACTIVE, &func_id, sizeof(func_id), + &is_active, sizeof(is_active), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return is_active; +} + +static void nbl_disp_chan_check_vf_is_active_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct device *dev = NBL_COMMON_TO_DEV(disp_mgt->common); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u16 func_id; + bool is_active; + int ret; + + func_id = *(u16 *)data; + + is_active = NBL_OPS_CALL_RET(res_ops->check_vf_is_active, (p, func_id)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_CHECK_VF_IS_ACTIVE, msg_id, err, + &is_active, sizeof(is_active)); + ret = chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_ack); + if (ret) + dev_err(dev, + "channel send ack failed with ret: %d, msg_type: %d\n", + ret, NBL_CHAN_CHECK_VF_IS_ACTIVE); +} + +static int +nbl_disp_get_ustore_total_pkt_drop_stats(void *priv, u8 eth_id, + struct nbl_ustore_stats *ustore_stats) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + int ret; + + ret = NBL_OPS_CALL_RET(res_ops->get_ustore_total_pkt_drop_stats, + (p, 
eth_id, ustore_stats)); + + return ret; +} + +static int +nbl_disp_chan_get_ustore_total_pkt_drop_stats_req(void *priv, u8 eth_id, + struct nbl_ustore_stats *p) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_GET_USTORE_TOTAL_PKT_DROP_STATS, ð_id, + sizeof(eth_id), p, sizeof(*p), 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_get_ustore_total_pkt_drop_stats_resp(void *priv, + u16 src_id, + u16 msg_id, + void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_ustore_stats ustore_stats = { 0 }; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int err = NBL_CHAN_RESP_OK; + u8 eth_id; + + eth_id = *(u8 *)data; + + err = NBL_OPS_CALL_RET(res_ops->get_ustore_total_pkt_drop_stats, + (p, eth_id, &ustore_stats)); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_GET_USTORE_TOTAL_PKT_DROP_STATS, + msg_id, err, &ustore_stats, sizeof(ustore_stats)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_get_link_forced(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_RET(res_ops->get_link_forced, (p, vsi_id)); +} + +static int nbl_disp_chan_get_link_forced_req(void *priv, u16 vsi_id) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct 
nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + int link_forced = 0; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_GET_LINK_FORCED, &vsi_id, sizeof(vsi_id), + &link_forced, sizeof(link_forced), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + return link_forced; +} + +static void nbl_disp_chan_get_link_forced_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int ret; + + ret = NBL_OPS_CALL_RET(res_ops->get_link_forced, (p, *(u16 *)data)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_LINK_FORCED, msg_id, + NBL_CHAN_RESP_OK, &ret, sizeof(ret)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_get_max_mtu(void *priv) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->get_max_mtu, + (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt))); + return ret; +} + +static int nbl_disp_set_mtu(void *priv, u16 vsi_id, u16 mtu) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_resource_ops *res_ops; + int ret; + + res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + ret = NBL_OPS_CALL_RET(res_ops->set_mtu, (p, vsi_id, mtu)); + return ret; +} + +static int nbl_disp_chan_set_mtu_req(void *priv, u16 vsi_id, u16 mtu) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct 
nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_set_mtu param = { 0 }; + + param.mtu = mtu; + param.vsi_id = vsi_id; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_MTU_SET, &param, sizeof(param), NULL, 0, 1); + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_set_mtu_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_ack_info chan_ack; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_set_mtu *param = NULL; + int err = NBL_CHAN_RESP_OK; + + param = (struct nbl_chan_param_set_mtu *)data; + err = NBL_OPS_CALL_RET(res_ops->set_mtu, + (p, param->vsi_id, param->mtu)); + + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_MTU_SET, msg_id, err, NULL, + 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_set_hw_status(void *priv, enum nbl_hw_status hw_status) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->set_hw_status, p, hw_status); +} + +static void nbl_disp_get_active_func_bitmaps(void *priv, unsigned long *bitmap, + int max_func) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->get_active_func_bitmaps, p, bitmap, + max_func); +} + +static void 
nbl_disp_register_dev_name(void *priv, u16 vsi_id, char *name) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->register_dev_name, p, vsi_id, + name); +} + +static void nbl_disp_chan_register_dev_name_req(void *priv, u16 vsi_id, + char *name) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_pf_name param = { 0 }; + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vsi_id = vsi_id; + strscpy(param.dev_name, name, IFNAMSIZ); + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_REGISTER_PF_NAME, &param, sizeof(param), + NULL, 0, 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); +} + +static void nbl_disp_chan_register_dev_name_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_pf_name *param; + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + + param = (struct nbl_chan_param_pf_name *)data; + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->register_dev_name, p, + param->vsi_id, param->dev_name); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_REGISTER_PF_NAME, msg_id, + ret, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static void nbl_disp_get_dev_name(void *priv, u16 vsi_id, char *name) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = 
NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->get_dev_name, p, vsi_id, name); +} + +static void nbl_disp_chan_get_dev_name_req(void *priv, u16 vsi_id, char *name) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_pf_name param = { 0 }; + struct nbl_chan_param_pf_name resp = { 0 }; + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vsi_id = vsi_id; + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_GET_PF_NAME, &param, sizeof(param), &resp, + sizeof(resp), 1); + chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send); + + strscpy(name, resp.dev_name, IFNAMSIZ); +} + +static void nbl_disp_chan_get_dev_name_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_param_pf_name *param; + struct nbl_chan_param_pf_name resp = { 0 }; + struct nbl_chan_ack_info chan_ack; + int ret = NBL_CHAN_RESP_OK; + + param = (struct nbl_chan_param_pf_name *)data; + NBL_OPS_CALL_LOCK(disp_mgt, res_ops->get_dev_name, p, param->vsi_id, + resp.dev_name); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_PF_NAME, msg_id, ret, + &resp, sizeof(resp)); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +static int nbl_disp_check_flow_table_spec(void *priv, u16 vlan_list_cnt, + u16 unicast_mac_cnt, + u16 multi_mac_cnt) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + void *p = 
NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + + return NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->check_flow_table_spec, + p, vlan_list_cnt, unicast_mac_cnt, + multi_mac_cnt); +} + +static int nbl_disp_chan_check_flow_table_spec_req(void *priv, + u16 vlan_list_cnt, + u16 unicast_mac_cnt, + u16 multi_mac_cnt) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_chan_send_info chan_send = { 0 }; + struct nbl_chan_param_check_flow_spec param = { 0 }; + struct nbl_common_info *common = NBL_DISP_MGT_TO_COMMON(disp_mgt); + + param.vlan_list_cnt = vlan_list_cnt; + param.unicast_mac_cnt = unicast_mac_cnt; + param.multi_mac_cnt = multi_mac_cnt; + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_CHECK_FLOWTABLE_SPEC, &param, sizeof(param), + NULL, 0, 1); + + return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + &chan_send); +} + +static void nbl_disp_chan_check_flow_table_spec_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt); + struct nbl_chan_param_check_flow_spec *param = NULL; + void *p = NBL_DISP_MGT_TO_RES_PRIV(disp_mgt); + struct nbl_chan_ack_info chan_ack; + int ret; + + param = (struct nbl_chan_param_check_flow_spec *)data; + ret = NBL_OPS_CALL_LOCK_RET(disp_mgt, res_ops->check_flow_table_spec, p, + param->vlan_list_cnt, + param->unicast_mac_cnt, + param->multi_mac_cnt); + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_CHECK_FLOWTABLE_SPEC, + msg_id, ret, NULL, 0); + chan_ops->send_ack(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_ack); +} + +/* NBL_DISP_SET_OPS(disp_op_name, func, ctrl_lvl, msg_type, msg_req, msg_resp) + * ctrl_lvl is to define when this disp_op should go directly to res_op, + * not sending a 
channel msg. + * Use X Macros to reduce codes in channel_op and disp_op setup/remove + */ +#define NBL_DISP_OPS_TBL \ +do { \ + NBL_DISP_SET_OPS(init_chip_module, nbl_disp_init_chip_module, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(deinit_chip_module, \ + nbl_disp_deinit_chip_module, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(get_resource_pt_ops, nbl_disp_get_res_pt_ops, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(queue_init, nbl_disp_queue_init, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(vsi_init, nbl_disp_vsi_init, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(init_vf_msix_map, nbl_disp_init_vf_msix_map, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_INIT_VF_MSIX_MAP, \ + nbl_disp_chan_init_vf_msix_map_req, \ + nbl_disp_chan_init_vf_msix_map_resp); \ + NBL_DISP_SET_OPS(configure_msix_map, \ + nbl_disp_configure_msix_map, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_CONFIGURE_MSIX_MAP, \ + nbl_disp_chan_configure_msix_map_req, \ + nbl_disp_chan_configure_msix_map_resp); \ + NBL_DISP_SET_OPS(destroy_msix_map, nbl_disp_destroy_msix_map, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_DESTROY_MSIX_MAP, \ + nbl_disp_chan_destroy_msix_map_req, \ + nbl_disp_chan_destroy_msix_map_resp); \ + NBL_DISP_SET_OPS(enable_mailbox_irq, \ + nbl_disp_enable_mailbox_irq, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ, \ + nbl_disp_chan_enable_mailbox_irq_req, \ + nbl_disp_chan_enable_mailbox_irq_resp); \ + NBL_DISP_SET_OPS(enable_abnormal_irq, \ + nbl_disp_enable_abnormal_irq, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(enable_adminq_irq, \ + nbl_disp_enable_adminq_irq, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(get_global_vector, nbl_disp_get_global_vector, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_GET_GLOBAL_VECTOR, \ + nbl_disp_chan_get_global_vector_req, \ + 
nbl_disp_chan_get_global_vector_resp); \ + NBL_DISP_SET_OPS(get_msix_entry_id, nbl_disp_get_msix_entry_id, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(alloc_rings, nbl_disp_alloc_rings, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(remove_rings, nbl_disp_remove_rings, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(start_tx_ring, nbl_disp_start_tx_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(stop_tx_ring, nbl_disp_stop_tx_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(start_rx_ring, nbl_disp_start_rx_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(stop_rx_ring, nbl_disp_stop_rx_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(kick_rx_ring, nbl_disp_kick_rx_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(get_vector_napi, nbl_disp_get_vector_napi, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(set_vector_info, nbl_disp_set_vector_info, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(register_vsi_ring, nbl_disp_register_vsi_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(register_net, nbl_disp_register_net, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_REGISTER_NET, \ + nbl_disp_chan_register_net_req, \ + nbl_disp_chan_register_net_resp); \ + NBL_DISP_SET_OPS(unregister_net, nbl_disp_unregister_net, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_UNREGISTER_NET, \ + nbl_disp_chan_unregister_net_req, \ + nbl_disp_chan_unregister_net_resp); \ + NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, \ + nbl_disp_chan_alloc_txrx_queues_req, \ + nbl_disp_chan_alloc_txrx_queues_resp); \ + NBL_DISP_SET_OPS(free_txrx_queues, nbl_disp_free_txrx_queues, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_FREE_TXRX_QUEUES, \ + 
nbl_disp_chan_free_txrx_queues_req, \ + nbl_disp_chan_free_txrx_queues_resp); \ + NBL_DISP_SET_OPS(register_vsi2q, nbl_disp_register_vsi2q, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_REGISTER_VSI2Q, \ + nbl_disp_chan_register_vsi2q_req, \ + nbl_disp_chan_register_vsi2q_resp); \ + NBL_DISP_SET_OPS(setup_q2vsi, nbl_disp_setup_q2vsi, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_Q2VSI,\ + nbl_disp_chan_setup_q2vsi_req, \ + nbl_disp_chan_setup_q2vsi_resp); \ + NBL_DISP_SET_OPS(remove_q2vsi, nbl_disp_remove_q2vsi, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_Q2VSI,\ + nbl_disp_chan_remove_q2vsi_req, \ + nbl_disp_chan_remove_q2vsi_resp); \ + NBL_DISP_SET_OPS(setup_rss, nbl_disp_setup_rss, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_RSS, \ + nbl_disp_chan_setup_rss_req, \ + nbl_disp_chan_setup_rss_resp); \ + NBL_DISP_SET_OPS(remove_rss, nbl_disp_remove_rss, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_RSS,\ + nbl_disp_chan_remove_rss_req, \ + nbl_disp_chan_remove_rss_resp); \ + NBL_DISP_SET_OPS(setup_queue, nbl_disp_setup_queue, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_QUEUE,\ + nbl_disp_chan_setup_queue_req, \ + nbl_disp_chan_setup_queue_resp); \ + NBL_DISP_SET_OPS(remove_all_queues, nbl_disp_remove_all_queues, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_REMOVE_ALL_QUEUES, \ + nbl_disp_chan_remove_all_queues_req, \ + nbl_disp_chan_remove_all_queues_resp); \ + NBL_DISP_SET_OPS(cfg_dsch, nbl_disp_cfg_dsch, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_CFG_DSCH, \ + nbl_disp_chan_cfg_dsch_req, \ + nbl_disp_chan_cfg_dsch_resp); \ + NBL_DISP_SET_OPS(setup_cqs, nbl_disp_setup_cqs, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_CQS, \ + nbl_disp_chan_setup_cqs_req, \ + nbl_disp_chan_setup_cqs_resp); \ + NBL_DISP_SET_OPS(remove_cqs, nbl_disp_remove_cqs, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_CQS,\ + nbl_disp_chan_remove_cqs_req, \ + nbl_disp_chan_remove_cqs_resp); \ + NBL_DISP_SET_OPS(get_msix_irq_enable_info, \ + nbl_disp_get_msix_irq_enable_info, \ + 
NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(add_macvlan, nbl_disp_add_macvlan, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_ADD_MACVLAN,\ + nbl_disp_chan_add_macvlan_req, \ + nbl_disp_chan_add_macvlan_resp); \ + NBL_DISP_SET_OPS(del_macvlan, nbl_disp_del_macvlan, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_DEL_MACVLAN,\ + nbl_disp_chan_del_macvlan_req, \ + nbl_disp_chan_del_macvlan_resp); \ + NBL_DISP_SET_OPS(add_multi_rule, nbl_disp_add_multi_rule, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_ADD_MULTI_RULE,\ + nbl_disp_chan_add_multi_rule_req, \ + nbl_disp_chan_add_multi_rule_resp); \ + NBL_DISP_SET_OPS(del_multi_rule, nbl_disp_del_multi_rule, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_DEL_MULTI_RULE,\ + nbl_disp_chan_del_multi_rule_req, \ + nbl_disp_chan_del_multi_rule_resp); \ + NBL_DISP_SET_OPS(cfg_multi_mcast, nbl_disp_cfg_multi_mcast, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_CFG_MULTI_MCAST_RULE, \ + nbl_disp_chan_cfg_multi_mcast_req, \ + nbl_disp_chan_cfg_multi_mcast_resp); \ + NBL_DISP_SET_OPS(setup_multi_group, nbl_disp_setup_multi_group, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_SETUP_MULTI_GROUP, \ + nbl_disp_chan_setup_multi_group_req, \ + nbl_disp_chan_setup_multi_group_resp); \ + NBL_DISP_SET_OPS(remove_multi_group, nbl_disp_remove_multi_group,\ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_REMOVE_MULTI_GROUP,\ + nbl_disp_chan_remove_multi_group_req, \ + nbl_disp_chan_remove_multi_group_resp); \ + NBL_DISP_SET_OPS(get_vsi_id, nbl_disp_get_vsi_id, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_VSI_ID,\ + nbl_disp_chan_get_vsi_id_req, \ + nbl_disp_chan_get_vsi_id_resp); \ + NBL_DISP_SET_OPS(get_eth_id, nbl_disp_get_eth_id, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_ETH_ID,\ + nbl_disp_chan_get_eth_id_req, \ + nbl_disp_chan_get_eth_id_resp); \ + NBL_DISP_SET_OPS(add_lldp_flow, nbl_disp_add_lldp_flow, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_ADD_LLDP_FLOW,\ + nbl_disp_chan_add_lldp_flow_req, \ + nbl_disp_chan_add_lldp_flow_resp); \ + 
NBL_DISP_SET_OPS(del_lldp_flow, nbl_disp_del_lldp_flow, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_DEL_LLDP_FLOW,\ + nbl_disp_chan_del_lldp_flow_req, \ + nbl_disp_chan_del_lldp_flow_resp); \ + NBL_DISP_SET_OPS(set_promisc_mode, nbl_disp_set_promisc_mode, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_PROSISC_MODE,\ + nbl_disp_chan_set_promisc_mode_req, \ + nbl_disp_chan_set_promisc_mode_resp); \ + NBL_DISP_SET_OPS(get_tx_headroom, nbl_disp_get_tx_headroom, \ + NBL_DISP_CTRL_LVL_NET, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(get_net_stats, nbl_disp_get_net_stats, \ + NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_rxfh_indir_size, nbl_disp_get_rxfh_indir_size,\ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE,\ + nbl_disp_chan_get_rxfh_indir_size_req, \ + nbl_disp_chan_get_rxfh_indir_size_resp); \ + NBL_DISP_SET_OPS(set_rxfh_indir, nbl_disp_set_rxfh_indir, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_RXFH_INDIR,\ + nbl_disp_chan_set_rxfh_indir_req, \ + nbl_disp_chan_set_rxfh_indir_resp); \ + NBL_DISP_SET_OPS(cfg_txrx_vlan, nbl_disp_cfg_txrx_vlan, \ + NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_hw_addr, nbl_disp_get_hw_addr, \ + NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_function_id, nbl_disp_get_function_id, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_FUNCTION_ID,\ + nbl_disp_chan_get_function_id_req, \ + nbl_disp_chan_get_function_id_resp); \ + NBL_DISP_SET_OPS(get_real_bdf, nbl_disp_get_real_bdf, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_REAL_BDF,\ + nbl_disp_chan_get_real_bdf_req, \ + nbl_disp_chan_get_real_bdf_resp); \ + NBL_DISP_SET_OPS(check_fw_heartbeat, nbl_disp_check_fw_heartbeat,\ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(check_fw_reset, nbl_disp_check_fw_reset, \ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(set_sfp_state, nbl_disp_set_sfp_state, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_SFP_STATE,\ + nbl_disp_chan_set_sfp_state_req, \ + 
nbl_disp_chan_set_sfp_state_resp); \ + NBL_DISP_SET_OPS(passthrough_fw_cmd, nbl_disp_passthrough_fw_cmd,\ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_product_fix_cap, nbl_disp_get_product_fix_cap,\ + NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_mbx_irq_num, nbl_disp_get_mbx_irq_num, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_MBX_IRQ_NUM,\ + nbl_disp_chan_get_mbx_irq_num_req, \ + nbl_disp_chan_get_mbx_irq_num_resp); \ + NBL_DISP_SET_OPS(get_adminq_irq_num, nbl_disp_get_adminq_irq_num,\ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_abnormal_irq_num, nbl_disp_get_abnormal_irq_num,\ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(clear_flow, nbl_disp_clear_flow, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_CLEAR_FLOW,\ + nbl_disp_chan_clear_flow_req, \ + nbl_disp_chan_clear_flow_resp); \ + NBL_DISP_SET_OPS(clear_queues, nbl_disp_clear_queues, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_CLEAR_QUEUE,\ + nbl_disp_chan_clear_queues_req, \ + nbl_disp_chan_clear_queues_resp); \ + NBL_DISP_SET_OPS(get_board_id, nbl_disp_get_board_id, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_BOARD_ID,\ + nbl_disp_chan_get_board_id_req, \ + nbl_disp_chan_get_board_id_resp); \ + NBL_DISP_SET_OPS(restore_abnormal_ring, \ + nbl_disp_restore_abnormal_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(restart_abnormal_ring, \ + nbl_disp_restart_abnormal_ring, \ + NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(stop_abnormal_hw_queue, \ + nbl_disp_stop_abnormal_hw_queue, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_STOP_ABNORMAL_HW_QUEUE, \ + nbl_disp_chan_stop_abnormal_hw_queue_req, \ + nbl_disp_chan_stop_abnormal_hw_queue_resp); \ + NBL_DISP_SET_OPS(stop_abnormal_sw_queue, \ + nbl_disp_stop_abnormal_sw_queue, \ + NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_local_queue_id, nbl_disp_get_local_queue_id,\ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + 
NBL_DISP_SET_OPS(get_vsi_global_queue_id, nbl_disp_get_vsi_global_qid,\ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID, \ + nbl_disp_chan_get_vsi_global_qid_req, \ + nbl_disp_chan_get_vsi_global_qid_resp); \ + NBL_DISP_SET_OPS(get_port_attributes, nbl_disp_get_port_attributes,\ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(update_ring_num, nbl_disp_update_ring_num, \ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(set_ring_num, nbl_disp_set_ring_num, \ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_part_number, nbl_disp_get_part_number, \ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_serial_number, nbl_disp_get_serial_number, \ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(enable_port, nbl_disp_enable_port, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(dummy_func, NULL, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_ADMINQ_PORT_NOTIFY, \ + NULL, \ + nbl_disp_chan_recv_port_notify_resp); \ + NBL_DISP_SET_OPS(get_link_state, nbl_disp_get_link_state, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_LINK_STATE,\ + nbl_disp_chan_get_link_state_req, \ + nbl_disp_chan_get_link_state_resp); \ + NBL_DISP_SET_OPS(set_wol, nbl_disp_set_wol, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_WOL, \ + nbl_disp_chan_set_wol_req, \ + nbl_disp_chan_set_wol_resp); \ + NBL_DISP_SET_OPS(set_eth_mac_addr, nbl_disp_set_eth_mac_addr, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SET_ETH_MAC_ADDR,\ + nbl_disp_chan_set_eth_mac_addr_req, \ + nbl_disp_chan_set_eth_mac_addr_resp); \ + NBL_DISP_SET_OPS(process_abnormal_event, \ + nbl_disp_process_abnormal_event, \ + NBL_DISP_CTRL_LVL_MGT, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(adapt_desc_gother, nbl_disp_adapt_desc_gother, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(flr_clear_net, nbl_disp_flr_clear_net, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(flr_clear_queues, 
nbl_disp_flr_clear_queues, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(flr_clear_flows, nbl_disp_flr_clear_flows, \ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(flr_clear_interrupt, nbl_disp_flr_clear_interrupt,\ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(covert_vfid_to_vsi_id, nbl_disp_covert_vfid_to_vsi_id,\ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(unmask_all_interrupts, nbl_disp_unmask_all_interrupts,\ + NBL_DISP_CTRL_LVL_MGT, -1, \ + NULL, NULL); \ + NBL_DISP_SET_OPS(keep_alive, nbl_disp_keep_alive_req, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_KEEP_ALIVE,\ + nbl_disp_keep_alive_req, \ + nbl_disp_chan_keep_alive_resp); \ + NBL_DISP_SET_OPS(get_rep_queue_info, nbl_disp_get_rep_queue_info,\ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_GET_REP_QUEUE_INFO, \ + nbl_disp_chan_get_rep_queue_info_req, \ + nbl_disp_chan_get_rep_queue_info_resp); \ + NBL_DISP_SET_OPS(get_vf_function_id, nbl_disp_get_vf_function_id,\ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_GET_VF_FUNCTION_ID, \ + nbl_disp_chan_get_vf_function_id_req, \ + nbl_disp_chan_get_vf_function_id_resp); \ + NBL_DISP_SET_OPS(get_vf_vsi_id, nbl_disp_get_vf_vsi_id, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_VF_VSI_ID,\ + nbl_disp_chan_get_vf_vsi_id_req, \ + nbl_disp_chan_get_vf_vsi_id_resp); \ + NBL_DISP_SET_OPS(check_vf_is_active, nbl_disp_check_vf_is_active,\ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_CHECK_VF_IS_ACTIVE,\ + nbl_disp_chan_check_vf_is_active_req, \ + nbl_disp_chan_check_vf_is_active_resp); \ + NBL_DISP_SET_OPS(get_ustore_total_pkt_drop_stats, \ + nbl_disp_get_ustore_total_pkt_drop_stats, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_GET_USTORE_TOTAL_PKT_DROP_STATS, \ + nbl_disp_chan_get_ustore_total_pkt_drop_stats_req,\ + nbl_disp_chan_get_ustore_total_pkt_drop_stats_resp);\ + NBL_DISP_SET_OPS(get_link_forced, nbl_disp_get_link_forced, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_LINK_FORCED,\ + 
nbl_disp_chan_get_link_forced_req, \ + nbl_disp_chan_get_link_forced_resp); \ + NBL_DISP_SET_OPS(set_mtu, nbl_disp_set_mtu, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_MTU_SET, \ + nbl_disp_chan_set_mtu_req, \ + nbl_disp_chan_set_mtu_resp); \ + NBL_DISP_SET_OPS(get_max_mtu, nbl_disp_get_max_mtu, \ + NBL_DISP_CTRL_LVL_NET, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(set_hw_status, nbl_disp_set_hw_status, \ + NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(get_active_func_bitmaps, \ + nbl_disp_get_active_func_bitmaps, \ + NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \ + NBL_DISP_SET_OPS(register_dev_name, nbl_disp_register_dev_name, \ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_REGISTER_PF_NAME, \ + nbl_disp_chan_register_dev_name_req, \ + nbl_disp_chan_register_dev_name_resp); \ + NBL_DISP_SET_OPS(get_dev_name, nbl_disp_get_dev_name, \ + NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_PF_NAME,\ + nbl_disp_chan_get_dev_name_req, \ + nbl_disp_chan_get_dev_name_resp); \ + NBL_DISP_SET_OPS(check_flow_table_spec, nbl_disp_check_flow_table_spec,\ + NBL_DISP_CTRL_LVL_MGT, \ + NBL_CHAN_MSG_CHECK_FLOWTABLE_SPEC, \ + nbl_disp_chan_check_flow_table_spec_req, \ + nbl_disp_chan_check_flow_table_spec_resp); \ +} while (0) + +/* Structure starts here, adding an op should not modify anything below */ +static int nbl_disp_setup_msg(struct nbl_dispatch_mgt *disp_mgt) +{ + struct nbl_dispatch_ops *disp_ops = NBL_DISP_MGT_TO_DISP_OPS(disp_mgt); + struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt); + void *p = NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt); + int ret = 0; + + if (!chan_ops->check_queue_exist(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), + NBL_CHAN_TYPE_MAILBOX)) + return 0; + + mutex_init(&disp_mgt->ops_mutex_lock); + spin_lock_init(&disp_mgt->ops_spin_lock); + disp_mgt->ops_lock_required = true; + +#define NBL_DISP_SET_OPS(disp_op, func, ctrl, msg_type, msg_req, resp) \ +do { \ + typeof(msg_type) _msg_type = (msg_type); \ + typeof(ctrl) _ctrl_lvl = (ctrl); \ + 
(void)(disp_ops->NBL_NAME(disp_op)); \ + (void)(func); \ + (void)(msg_req); \ + (void)_ctrl_lvl; \ + if (_msg_type >= 0) \ + ret += chan_ops->register_msg(p, _msg_type, resp, disp_mgt);\ +} while (0) + NBL_DISP_OPS_TBL; +#undef NBL_DISP_SET_OPS + + return ret; +} + +/* Ctrl lvl means that if a certain level is set, then all disp_ops that + * declared this lvl will go directly to res_ops, rather than sending a + * channel msg, and vice versa. + */ +static int nbl_disp_setup_ctrl_lvl(struct nbl_dispatch_mgt *disp_mgt, u32 lvl) +{ + struct nbl_dispatch_ops *disp_ops; + + disp_ops = NBL_DISP_MGT_TO_DISP_OPS(disp_mgt); + + set_bit(lvl, disp_mgt->ctrl_lvl); + +#define NBL_DISP_SET_OPS(disp_op, func, ctrl, msg_type, msg_req, msg_resp) \ +do { \ + typeof(msg_type) _msg_type = (msg_type); \ + (void)(_msg_type); \ + (void)(msg_resp); \ + disp_ops->NBL_NAME(disp_op) = \ + test_bit(ctrl, disp_mgt->ctrl_lvl) ? func : msg_req; \ +} while (0) + NBL_DISP_OPS_TBL; +#undef NBL_DISP_SET_OPS + + return 0; +} + +static int nbl_disp_setup_disp_mgt(struct nbl_common_info *common, + struct nbl_dispatch_mgt **disp_mgt) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + *disp_mgt = + devm_kzalloc(dev, sizeof(struct nbl_dispatch_mgt), GFP_KERNEL); + if (!*disp_mgt) + return -ENOMEM; + + NBL_DISP_MGT_TO_COMMON(*disp_mgt) = common; + return 0; +} + +static void nbl_disp_remove_disp_mgt(struct nbl_common_info *common, + struct nbl_dispatch_mgt **disp_mgt) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + devm_kfree(dev, *disp_mgt); + *disp_mgt = NULL; +} + +static void nbl_disp_remove_ops(struct device *dev, + struct nbl_dispatch_ops_tbl **disp_ops_tbl) +{ + devm_kfree(dev, NBL_DISP_OPS_TBL_TO_OPS(*disp_ops_tbl)); + devm_kfree(dev, *disp_ops_tbl); + *disp_ops_tbl = NULL; +} + +static int nbl_disp_setup_ops(struct device *dev, + struct nbl_dispatch_ops_tbl **disp_ops_tbl, + struct nbl_dispatch_mgt *disp_mgt) +{ + struct nbl_dispatch_ops *disp_ops; + + *disp_ops_tbl = 
devm_kzalloc(dev, sizeof(struct nbl_dispatch_ops_tbl), + GFP_KERNEL); + if (!*disp_ops_tbl) + return -ENOMEM; + + disp_ops = + devm_kzalloc(dev, sizeof(struct nbl_dispatch_ops), GFP_KERNEL); + if (!disp_ops) + return -ENOMEM; + + NBL_DISP_OPS_TBL_TO_OPS(*disp_ops_tbl) = disp_ops; + NBL_DISP_OPS_TBL_TO_PRIV(*disp_ops_tbl) = disp_mgt; + + return 0; +} + +int nbl_disp_init(void *p, struct nbl_init_param *param) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct nbl_dispatch_mgt **disp_mgt = + (struct nbl_dispatch_mgt **)&NBL_ADAP_TO_DISP_MGT(adapter); + struct nbl_dispatch_ops_tbl **disp_ops_tbl = + &NBL_ADAP_TO_DISP_OPS_TBL(adapter); + struct nbl_resource_ops_tbl *res_ops_tbl = + NBL_ADAP_TO_RES_OPS_TBL(adapter); + struct nbl_channel_ops_tbl *chan_ops_tbl = + NBL_ADAP_TO_CHAN_OPS_TBL(adapter); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct device *dev = NBL_ADAP_TO_DEV(adapter); + int ret; + + ret = nbl_disp_setup_disp_mgt(common, disp_mgt); + if (ret) + goto setup_mgt_fail; + + ret = nbl_disp_setup_ops(dev, disp_ops_tbl, *disp_mgt); + if (ret) + goto setup_ops_fail; + + NBL_DISP_MGT_TO_RES_OPS_TBL(*disp_mgt) = res_ops_tbl; + NBL_DISP_MGT_TO_CHAN_OPS_TBL(*disp_mgt) = chan_ops_tbl; + NBL_DISP_MGT_TO_DISP_OPS_TBL(*disp_mgt) = *disp_ops_tbl; + + ret = nbl_disp_setup_msg(*disp_mgt); + if (ret) + goto setup_msg_fail; + + if (param->caps.has_ctrl) { + ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_MGT); + if (ret) + goto setup_msg_fail; + } + + if (param->caps.has_net) { + ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_NET); + if (ret) + goto setup_msg_fail; + } + + ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_ALWAYS); + if (ret) + goto setup_msg_fail; + + return 0; + +setup_msg_fail: + nbl_disp_remove_ops(dev, disp_ops_tbl); +setup_ops_fail: + nbl_disp_remove_disp_mgt(common, disp_mgt); +setup_mgt_fail: + return ret; +} + +void nbl_disp_remove(void *p) +{ + struct nbl_adapter *adapter = 
(struct nbl_adapter *)p; + struct nbl_dispatch_ops_tbl **disp_ops_tbl; + struct nbl_dispatch_mgt **disp_mgt; + struct nbl_common_info *common; + struct device *dev; + + dev = NBL_ADAP_TO_DEV(adapter); + common = NBL_ADAP_TO_COMMON(adapter); + disp_mgt = (struct nbl_dispatch_mgt **)&NBL_ADAP_TO_DISP_MGT(adapter); + disp_ops_tbl = &NBL_ADAP_TO_DISP_OPS_TBL(adapter); + + nbl_disp_remove_ops(dev, disp_ops_tbl); + + nbl_disp_remove_disp_mgt(common, disp_mgt); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h new file mode 100644 index 000000000000..541603b52054 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dispatch.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_DISPATCH_H_ +#define _NBL_DISPATCH_H_ + +#include "nbl_core.h" + +#define NBL_DISP_MGT_TO_COMMON(disp_mgt) ((disp_mgt)->common) +#define NBL_DISP_MGT_TO_DEV(disp_mgt) \ + NBL_COMMON_TO_DEV(NBL_DISP_MGT_TO_COMMON(disp_mgt)) + +#define NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt) ((disp_mgt)->res_ops_tbl) +#define NBL_DISP_MGT_TO_RES_OPS(disp_mgt) \ + (NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt)->ops) +#define NBL_DISP_MGT_TO_RES_PRIV(disp_mgt) \ + (NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt)->priv) +#define NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt) ((disp_mgt)->chan_ops_tbl) +#define NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt) \ + (NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt)->ops) +#define NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt) \ + (NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt)->priv) +#define NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt) ((disp_mgt)->disp_ops_tbl) +#define NBL_DISP_MGT_TO_DISP_OPS(disp_mgt) \ + (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->ops) +#define NBL_DISP_MGT_TO_DISP_PRIV(disp_mgt) \ + (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->priv) + +#define NBL_OPS_CALL_LOCK(disp_mgt, func, ...) 
\ +do { \ + typeof(disp_mgt) _disp_mgt = (disp_mgt); \ + typeof(func) _func = (func); \ + \ + if (_disp_mgt->ops_lock_required) \ + mutex_lock(&_disp_mgt->ops_mutex_lock); \ + \ + if (_func) \ + _func(__VA_ARGS__); \ + \ + if (_disp_mgt->ops_lock_required) \ + mutex_unlock(&_disp_mgt->ops_mutex_lock); \ +} while (0) + +#define NBL_OPS_CALL_LOCK_RET(disp_mgt, func, ...) \ +({ \ + typeof(disp_mgt) _disp_mgt = (disp_mgt); \ + typeof(func) _func = (func); \ + typeof(_func(__VA_ARGS__)) _ret = 0; \ + \ + if (_disp_mgt->ops_lock_required) \ + mutex_lock(&_disp_mgt->ops_mutex_lock); \ + \ + if (_func) \ + _ret = _func(__VA_ARGS__); \ + \ + if (_disp_mgt->ops_lock_required) \ + mutex_unlock(&_disp_mgt->ops_mutex_lock); \ + \ + _ret; \ +}) + +struct nbl_dispatch_mgt { + struct nbl_common_info *common; + struct nbl_resource_ops_tbl *res_ops_tbl; + struct nbl_channel_ops_tbl *chan_ops_tbl; + struct nbl_dispatch_ops_tbl *disp_ops_tbl; + DECLARE_BITMAP(ctrl_lvl, NBL_DISP_CTRL_LVL_MAX); + /* use for the caller not in interrupt */ + struct mutex ops_mutex_lock; + /* use for the caller is in interrupt or other can't sleep thread */ + spinlock_t ops_spin_lock; + bool ops_lock_required; +}; + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h new file mode 100644 index 000000000000..852cfea3c9c3 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dispatch.h @@ -0,0 +1,190 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_DEF_DISPATCH_H_ +#define _NBL_DEF_DISPATCH_H_ + +#include "nbl_include.h" + +#define NBL_DISP_OPS_TBL_TO_OPS(disp_ops_tbl) ((disp_ops_tbl)->ops) +#define NBL_DISP_OPS_TBL_TO_PRIV(disp_ops_tbl) ((disp_ops_tbl)->priv) + +enum { + NBL_DISP_CTRL_LVL_NEVER = 0, + NBL_DISP_CTRL_LVL_MGT, + NBL_DISP_CTRL_LVL_NET, + NBL_DISP_CTRL_LVL_ALWAYS, + NBL_DISP_CTRL_LVL_MAX, +}; + +struct nbl_dispatch_ops { + int (*init_chip_module)(void *priv); + void (*deinit_chip_module)(void *priv); + void (*get_resource_pt_ops)(void *priv, + struct nbl_resource_pt_ops *pt_ops); + int (*queue_init)(void *priv); + int (*vsi_init)(void *priv); + int (*init_vf_msix_map)(void *priv, u16 func_id, bool enable); + int (*configure_msix_map)(void *priv, u16 num_net_msix, + u16 num_others_msix, bool net_msix_mask_en); + int (*destroy_msix_map)(void *priv); + int (*enable_mailbox_irq)(void *p, u16 vector_id, bool enable_msix); + int (*enable_abnormal_irq)(void *p, u16 vector_id, bool enable_msix); + int (*enable_adminq_irq)(void *p, u16 vector_id, bool enable_msix); + u16 (*get_global_vector)(void *priv, u16 vsi_id, u16 local_vec_id); + u16 (*get_msix_entry_id)(void *priv, u16 vsi_id, u16 local_vec_id); + + int (*get_mbx_irq_num)(void *priv); + int (*get_adminq_irq_num)(void *priv); + int (*get_abnormal_irq_num)(void *priv); + int (*alloc_rings)(void *priv, struct net_device *netdev, + struct nbl_ring_param *param); + void (*remove_rings)(void *priv); + dma_addr_t (*start_tx_ring)(void *priv, u8 ring_index); + void (*stop_tx_ring)(void *priv, u8 ring_index); + dma_addr_t (*start_rx_ring)(void *priv, u8 ring_index, bool use_napi); + void (*stop_rx_ring)(void *priv, u8 ring_index); + void (*kick_rx_ring)(void *priv, u16 index); + struct nbl_napi_struct *(*get_vector_napi)(void *priv, u16 index); + void (*set_vector_info)(void *priv, u8 __iomem *irq_enable_base, + u32 irq_data, u16 index, bool mask_en); + int (*register_net)(void *priv, + struct nbl_register_net_param 
*register_param, + struct nbl_register_net_result *register_result); + void (*register_vsi_ring)(void *priv, u16 vsi_index, u16 ring_offset, + u16 ring_num); + int (*unregister_net)(void *priv); + int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num); + void (*free_txrx_queues)(void *priv, u16 vsi_id); + int (*setup_queue)(void *priv, struct nbl_txrx_queue_param *param, + bool is_tx); + void (*remove_all_queues)(void *priv, u16 vsi_id); + int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id, + u16 queue_offset, u16 queue_num); + int (*setup_q2vsi)(void *priv, u16 vsi_id); + void (*remove_q2vsi)(void *priv, u16 vsi_id); + int (*setup_rss)(void *priv, u16 vsi_id); + void (*remove_rss)(void *priv, u16 vsi_id); + int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld); + int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps, + bool rss_indir_set); + void (*remove_cqs)(void *priv, u16 vsi_id); + + void (*clear_queues)(void *priv, u16 vsi_id); + + u16 (*get_vsi_global_qid)(void *priv, u16 vsi_id, u16 local_qid); + u16 (*get_local_queue_id)(void *priv, u16 vsi_id, u16 global_queue_id); + u16 (*get_vsi_global_queue_id)(void *priv, u16 vsi_id, u16 local_qid); + + u8 __iomem *(*get_msix_irq_enable_info)(void *priv, u16 global_vec_id, + u32 *irq_data); + + int (*add_macvlan)(void *priv, u8 *mac, u16 vlan, u16 vsi); + void (*del_macvlan)(void *priv, u8 *mac, u16 vlan, u16 vsi); + int (*add_lldp_flow)(void *priv, u16 vsi); + void (*del_lldp_flow)(void *priv, u16 vsi); + int (*add_multi_rule)(void *priv, u16 vsi); + void (*del_multi_rule)(void *priv, u16 vsi); + int (*cfg_multi_mcast)(void *priv, u16 vsi, u16 enable); + int (*setup_multi_group)(void *priv); + void (*remove_multi_group)(void *priv); + void (*clear_flow)(void *priv, u16 vsi_id); + + u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type); + void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id, + u8 *logic_eth_id); + int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode); + int 
(*set_mtu)(void *priv, u16 vsi_id, u16 mtu); + int (*get_max_mtu)(void *priv); + u32 (*get_tx_headroom)(void *priv); + void (*get_rep_queue_info)(void *priv, u16 *queue_num, u16 *queue_size); + void (*get_net_stats)(void *priv, struct nbl_stats *queue_stats); + void (*get_rxfh_indir_size)(void *priv, u16 vsi_id, + u32 *rxfh_indir_size); + int (*set_rxfh_indir)(void *priv, u16 vsi_id, const u32 *indir, + u32 indir_size); + int (*get_port_attributes)(void *priv); + int (*enable_port)(void *priv, bool enable); + void (*recv_port_notify)(void *priv); + int (*get_link_state)(void *priv, u8 eth_id, + struct nbl_eth_link_info *eth_link_info); + int (*set_eth_mac_addr)(void *priv, u8 *mac, u8 eth_id); + int (*process_abnormal_event)(void *priv, + struct nbl_abnormal_event_info *info); + int (*set_wol)(void *priv, u8 eth_id, bool enable); + void (*adapt_desc_gother)(void *priv); + void (*flr_clear_net)(void *priv, u16 vfid); + void (*flr_clear_queues)(void *priv, u16 vfid); + + void (*flr_clear_flows)(void *priv, u16 vfid); + void (*flr_clear_interrupt)(void *priv, u16 vfid); + + u16 (*covert_vfid_to_vsi_id)(void *priv, u16 vfid); + void (*unmask_all_interrupts)(void *priv); + void (*keep_alive)(void *priv); + void (*cfg_txrx_vlan)(void *priv, u16 vlan_tci, u16 vlan_proto, + u8 vsi_index); + + u8 __iomem *(*get_hw_addr)(void *priv, size_t *size); + u16 (*get_function_id)(void *priv, u16 vsi_id); + void (*get_real_bdf)(void *priv, u16 vsi_id, u8 *bus, u8 *dev, + u8 *function); + + bool (*check_fw_heartbeat)(void *priv); + bool (*check_fw_reset)(void *priv); + + int (*set_sfp_state)(void *priv, u8 eth_id, u8 state); + int (*passthrough_fw_cmd)(void *priv, + struct nbl_passthrough_fw_cmd *param, + struct nbl_passthrough_fw_cmd *result); + int (*update_ring_num)(void *priv); + int (*set_ring_num)(void *priv, + struct nbl_cmd_net_ring_num *param); + int (*get_part_number)(void *priv, char *part_number); + int (*get_serial_number)(void *priv, char *serial_number); + + int 
(*get_board_id)(void *priv); + + bool (*get_product_fix_cap)(void *priv, enum nbl_fix_cap_type cap_type); + + void (*dummy_func)(void *priv); + + dma_addr_t (*restore_abnormal_ring)(void *priv, int ring_index, + int type); + int (*restart_abnormal_ring)(void *priv, int ring_index, int type); + int (*stop_abnormal_sw_queue)(void *priv, u16 local_queue_id, int type); + int (*stop_abnormal_hw_queue)(void *priv, u16 vsi_id, + u16 local_queue_id, int type); + u16 (*get_vf_function_id)(void *priv, u16 vsi_id, int vf_id); + u16 (*get_vf_vsi_id)(void *priv, u16 vsi_id, int vf_id); + bool (*check_vf_is_active)(void *priv, u16 func_id); + int (*get_ustore_total_pkt_drop_stats)(void *priv, u8 eth_id, + struct nbl_ustore_stats *stat); + + int (*get_link_forced)(void *priv, u16 vsi_id); + int (*set_tx_rate)(void *priv, u16 func_id, int tx_rate, int burst); + int (*set_rx_rate)(void *priv, u16 func_id, int rx_rate, int burst); + + void (*register_dev_name)(void *priv, u16 vsi_id, char *name); + void (*get_dev_name)(void *priv, u16 vsi_id, char *name); + + void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); + void (*get_active_func_bitmaps)(void *priv, unsigned long *bitmap, + int max_func); + + int (*check_flow_table_spec)(void *priv, u16 vlan_cnt, u16 unicast_cnt, + u16 multicast_cnt); +}; + +struct nbl_dispatch_ops_tbl { + struct nbl_dispatch_ops *ops; + void *priv; +}; + +int nbl_disp_init(void *p, struct nbl_init_param *param); +void nbl_disp_remove(void *p); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c index 9cee11498e9f..fda55e97d743 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c @@ -76,7 +76,13 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, ret = product_base_ops->res_init(adapter, param); if (ret) goto res_init_fail; + + ret = nbl_disp_init(adapter, param); + if (ret) + goto disp_init_fail; return 
adapter; +disp_init_fail: + product_base_ops->res_remove(adapter); res_init_fail: product_base_ops->chan_remove(adapter); chan_init_fail: @@ -93,6 +99,7 @@ void nbl_core_remove(struct nbl_adapter *adapter) dev = NBL_ADAP_TO_DEV(adapter); product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter); + nbl_disp_remove(adapter); product_base_ops->res_remove(adapter); product_base_ops->chan_remove(adapter); product_base_ops->hw_remove(adapter); -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 12/15] net/nebula-matrix: add Service layer definitions and implementation
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (10 preceding siblings ...)
2026-01-09 10:01 ` [PATCH v2 net-next 11/15] net/nebula-matrix: add Dispatch layer " illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 13/15] net/nebula-matrix: add Dev init, remove operation illusion.wang
` (3 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list

Service layer functions include:
1. Queue and ring management
2. Network device operations
3. VLAN and submac management
4. Interrupt and IRQ management
5. Flow and filter management
6. VF management
7. Link state and SFP management
8. Statistics and monitoring
9. Firmware and device management

Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
 .../net/ethernet/nebula-matrix/nbl/Makefile | 1 +
 .../net/ethernet/nebula-matrix/nbl/nbl_core.h | 4 +
 .../nebula-matrix/nbl/nbl_core/nbl_service.c | 136 +++++++++++
 .../nebula-matrix/nbl/nbl_core/nbl_service.h | 214 ++++++++++++++++++
 .../nbl/nbl_include/nbl_def_service.h | 24 ++
 .../nbl/nbl_include/nbl_include.h | 21 ++
 .../net/ethernet/nebula-matrix/nbl/nbl_main.c | 7 +
 7 files changed, 407 insertions(+)
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index dba7bf27be46..8a02d5515e67 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ 
b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -18,6 +18,7 @@ nbl_core-objs += nbl_common/nbl_common.o \ nbl_hw/nbl_vsi.o \ nbl_hw/nbl_adminq.o \ nbl_core/nbl_dispatch.o \ + nbl_core/nbl_service.o \ nbl_main.o # Provide include files diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h index d32a8c4a7519..19dce6782d57 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h @@ -13,6 +13,7 @@ #include "nbl_def_hw.h" #include "nbl_def_resource.h" #include "nbl_def_dispatch.h" +#include "nbl_def_service.h" #include "nbl_def_common.h" #define NBL_ADAP_TO_PDEV(adapter) ((adapter)->pdev) @@ -23,10 +24,12 @@ #define NBL_ADAP_TO_HW_MGT(adapter) ((adapter)->core.hw_mgt) #define NBL_ADAP_TO_RES_MGT(adapter) ((adapter)->core.res_mgt) #define NBL_ADAP_TO_DISP_MGT(adapter) ((adapter)->core.disp_mgt) +#define NBL_ADAP_TO_SERV_MGT(adapter) ((adapter)->core.serv_mgt) #define NBL_ADAP_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt) #define NBL_ADAP_TO_HW_OPS_TBL(adapter) ((adapter)->intf.hw_ops_tbl) #define NBL_ADAP_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl) #define NBL_ADAP_TO_DISP_OPS_TBL(adapter) ((adapter)->intf.dispatch_ops_tbl) +#define NBL_ADAP_TO_SERV_OPS_TBL(adapter) ((adapter)->intf.service_ops_tbl) #define NBL_ADAP_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl) #define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \ @@ -71,6 +74,7 @@ struct nbl_interface { struct nbl_hw_ops_tbl *hw_ops_tbl; struct nbl_resource_ops_tbl *resource_ops_tbl; struct nbl_dispatch_ops_tbl *dispatch_ops_tbl; + struct nbl_service_ops_tbl *service_ops_tbl; struct nbl_channel_ops_tbl *channel_ops_tbl; }; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c new file mode 100644 index 000000000000..c4ce5da65d8f --- /dev/null +++ 
b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c @@ -0,0 +1,136 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ +#include <crypto/hash.h> +#include <linux/etherdevice.h> +#include <linux/ip.h> +#include <net/ipv6.h> +#include <linux/sctp.h> +#include <linux/rtnetlink.h> +#include <linux/if_vlan.h> + +#include "nbl_service.h" +static void nbl_serv_setup_flow_mgt(struct nbl_serv_flow_mgt *flow_mgt) +{ + int i = 0; + + INIT_LIST_HEAD(&flow_mgt->vlan_list); + for (i = 0; i < NBL_SUBMAC_MAX; i++) + INIT_LIST_HEAD(&flow_mgt->submac_list[i]); +} + +static struct nbl_service_ops serv_ops = { +}; + +/* Structure starts here, adding an op should not modify anything below */ +static int nbl_serv_setup_serv_mgt(struct nbl_common_info *common, + struct nbl_service_mgt **serv_mgt) +{ + struct device *dev; + + dev = NBL_COMMON_TO_DEV(common); + *serv_mgt = + devm_kzalloc(dev, sizeof(struct nbl_service_mgt), GFP_KERNEL); + if (!*serv_mgt) + return -ENOMEM; + + NBL_SERV_MGT_TO_COMMON(*serv_mgt) = common; + nbl_serv_setup_flow_mgt(NBL_SERV_MGT_TO_FLOW_MGT(*serv_mgt)); + + return 0; +} + +static void nbl_serv_remove_serv_mgt(struct nbl_common_info *common, + struct nbl_service_mgt **serv_mgt) +{ + struct device *dev = NBL_COMMON_TO_DEV(common); + struct nbl_serv_ring_mgt *ring_mgt = + NBL_SERV_MGT_TO_RING_MGT(*serv_mgt); + + if (ring_mgt->rss_indir_user) + devm_kfree(dev, ring_mgt->rss_indir_user); + devm_kfree(dev, *serv_mgt); + *serv_mgt = NULL; +} + +static void nbl_serv_remove_ops(struct device *dev, + struct nbl_service_ops_tbl **serv_ops_tbl) +{ + devm_kfree(dev, *serv_ops_tbl); + *serv_ops_tbl = NULL; +} + +static int nbl_serv_setup_ops(struct device *dev, + struct nbl_service_ops_tbl **serv_ops_tbl, + struct nbl_service_mgt *serv_mgt) +{ + *serv_ops_tbl = devm_kzalloc(dev, sizeof(struct nbl_service_ops_tbl), + GFP_KERNEL); + if (!*serv_ops_tbl) + return -ENOMEM; + + (*serv_ops_tbl)->ops = &serv_ops; + 
(*serv_ops_tbl)->priv = serv_mgt; + + return 0; +} + +int nbl_serv_init(void *p, struct nbl_init_param *param) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct device *dev; + struct nbl_common_info *common; + struct nbl_service_mgt **serv_mgt; + struct nbl_service_ops_tbl **serv_ops_tbl; + struct nbl_dispatch_ops_tbl *disp_ops_tbl; + struct nbl_dispatch_ops *disp_ops; + struct nbl_channel_ops_tbl *chan_ops_tbl; + int ret = 0; + + dev = NBL_ADAP_TO_DEV(adapter); + common = NBL_ADAP_TO_COMMON(adapter); + serv_mgt = (struct nbl_service_mgt **)&NBL_ADAP_TO_SERV_MGT(adapter); + serv_ops_tbl = &NBL_ADAP_TO_SERV_OPS_TBL(adapter); + disp_ops_tbl = NBL_ADAP_TO_DISP_OPS_TBL(adapter); + chan_ops_tbl = NBL_ADAP_TO_CHAN_OPS_TBL(adapter); + disp_ops = disp_ops_tbl->ops; + + ret = nbl_serv_setup_serv_mgt(common, serv_mgt); + if (ret) + goto setup_mgt_fail; + + ret = nbl_serv_setup_ops(dev, serv_ops_tbl, *serv_mgt); + if (ret) + goto setup_ops_fail; + + (*serv_mgt)->disp_ops_tbl = disp_ops_tbl; + (*serv_mgt)->chan_ops_tbl = chan_ops_tbl; + disp_ops->get_resource_pt_ops(disp_ops_tbl->priv, + &(*serv_ops_tbl)->pt_ops); + + return 0; + +setup_ops_fail: + nbl_serv_remove_serv_mgt(common, serv_mgt); +setup_mgt_fail: + return ret; +} + +void nbl_serv_remove(void *p) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct device *dev; + struct nbl_common_info *common; + struct nbl_service_mgt **serv_mgt; + struct nbl_service_ops_tbl **serv_ops_tbl; + + dev = NBL_ADAP_TO_DEV(adapter); + common = NBL_ADAP_TO_COMMON(adapter); + serv_mgt = (struct nbl_service_mgt **)&NBL_ADAP_TO_SERV_MGT(adapter); + serv_ops_tbl = &NBL_ADAP_TO_SERV_OPS_TBL(adapter); + + nbl_serv_remove_ops(dev, serv_ops_tbl); + nbl_serv_remove_serv_mgt(common, serv_mgt); +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h new file mode 100644 index 000000000000..457eac6fb3a7 --- /dev/null +++ 
b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h @@ -0,0 +1,214 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. + * Author: + */ + +#ifndef _NBL_SERVICE_H_ +#define _NBL_SERVICE_H_ + +#include <linux/mm.h> +#include <linux/ptr_ring.h> +#include "nbl_core.h" + +#define NBL_SERV_MGT_TO_COMMON(serv_mgt) ((serv_mgt)->common) +#define NBL_SERV_MGT_TO_DEV(serv_mgt) \ + NBL_COMMON_TO_DEV(NBL_SERV_MGT_TO_COMMON(serv_mgt)) +#define NBL_SERV_MGT_TO_RING_MGT(serv_mgt) (&(serv_mgt)->ring_mgt) +#define NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt) (&(serv_mgt)->flow_mgt) +#define NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) ((serv_mgt)->net_resource_mgt) + +#define NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt) ((serv_mgt)->disp_ops_tbl) +#define NBL_SERV_MGT_TO_DISP_OPS(serv_mgt) \ + (NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt)->ops) +#define NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt) \ + (NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt)->priv) + +#define NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt) ((serv_mgt)->chan_ops_tbl) +#define NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt) \ + (NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt)->ops) +#define NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt) \ + (NBL_SERV_MGT_TO_CHAN_OPS_TBL(serv_mgt)->priv) + +#define NBL_DEFAULT_VLAN_ID 0 +#define NBL_HW_STATS_PERIOD_SECONDS 5 +#define NBL_HW_STATS_RX_RATE_THRESHOLD (1000) /* 1k pps */ + +#define NBL_TX_TSO_MSS_MIN (256) +#define NBL_TX_TSO_MSS_MAX (16383) +#define NBL_TX_TSO_L2L3L4_HDR_LEN_MIN (42) +#define NBL_TX_TSO_L2L3L4_HDR_LEN_MAX (128) +#define NBL_TX_CHECKSUM_OFFLOAD_L2L3L4_HDR_LEN_MAX (255) + +#define SET_DPORT_TYPE_VSI_HOST (0) +#define SET_DPORT_TYPE_VSI_ECPU (1) +#define SET_DPORT_TYPE_ETH_LAG (2) +#define SET_DPORT_TYPE_SP_PORT (3) + +/* primary vlan in vlan list */ +#define NBL_NO_TRUST_MAX_VLAN 9 +/* primary mac not in submac list */ +#define NBL_NO_TRUST_MAX_MAC 12 + +struct nbl_serv_ring { + dma_addr_t dma; + u16 index; + u16 local_queue_id; + u16 global_queue_id; + bool need_recovery; + u32 
tx_timeout_count; +}; + +struct nbl_serv_vector { + char name[32]; + cpumask_t cpumask; + struct net_device *netdev; + struct nbl_napi_struct *nbl_napi; + struct nbl_serv_ring *tx_ring; + struct nbl_serv_ring *rx_ring; + u8 __iomem *irq_enable_base; + u32 irq_data; + u16 local_vec_id; + u16 global_vec_id; +}; + +struct nbl_serv_ring_vsi_info { + u16 vsi_index; + u16 vsi_id; + u16 ring_offset; + u16 ring_num; + u16 active_ring_num; + bool itr_dynamic; + bool started; +}; + +struct nbl_serv_ring_mgt { + struct nbl_serv_ring *tx_rings; + struct nbl_serv_ring *rx_rings; + struct nbl_serv_vector *vectors; + struct nbl_serv_ring_vsi_info vsi_info[NBL_VSI_MAX]; + u32 *rss_indir_user; + u16 tx_desc_num; + u16 rx_desc_num; + u16 tx_ring_num; + u16 rx_ring_num; + u16 active_ring_num; + bool net_msix_mask_en; +}; + +struct nbl_serv_vlan_node { + struct list_head node; + u16 vid; + // primary_mac_effective means base mac + vlan ok + u16 primary_mac_effective; + // sub_mac_effective means sub mac + vlan ok + u16 sub_mac_effective; + u16 ref_cnt; +}; + +struct nbl_serv_submac_node { + struct list_head node; + u8 mac[ETH_ALEN]; + // effective means this submac + allvlan flowrule effective + u16 effective; +}; + +enum { + NBL_PROMISC = 0, + NBL_ALLMULTI = 1, +}; + +enum { + NBL_SUBMAC_UNICAST = 0, + NBL_SUBMAC_MULTI = 1, + NBL_SUBMAC_MAX = 2 +}; + +struct nbl_serv_flow_mgt { + struct list_head vlan_list; + struct list_head submac_list[NBL_SUBMAC_MAX]; + u16 vid; + u8 mac[ETH_ALEN]; + u8 eth; + bool trusted_en; + bool trusted_update; + u16 vlan_list_cnt; + u16 active_submac_list; + u16 submac_list_cnt; + u16 unicast_mac_cnt; + u16 multi_mac_cnt; + u16 promisc; + bool force_promisc; + bool ucast_flow_en; + bool mcast_flow_en; + bool pending_async_work; +}; + +struct nbl_mac_filter { + struct list_head list; + u8 macaddr[ETH_ALEN]; +}; + +struct nbl_serv_netdev_ops { + void *pf_netdev_ops; +}; + +struct nbl_serv_net_resource_mgt { + struct nbl_service_mgt *serv_mgt; + struct 
net_device *netdev; + struct work_struct rx_mode_async; + struct work_struct tx_timeout; + struct work_struct update_link_state; + struct work_struct update_vlan; + struct delayed_work watchdog_task; + struct timer_list serv_timer; + unsigned long serv_timer_period; + + struct list_head tmp_add_filter_list; + struct list_head tmp_del_filter_list; + struct nbl_serv_netdev_ops netdev_ops; + u16 curr_promiscuout_mode; + u16 num_net_msix; + bool update_submac; + int num_vfs; + int total_vfs; + + /* stats for netdev */ + u64 get_stats_jiffies; + struct nbl_stats stats; + struct nbl_hw_stats hw_stats; + unsigned long hw_stats_jiffies; + unsigned long hw_stats_period; + u32 configured_speed; + u32 configured_fec; + int link_forced; + + u16 vlan_tci; + u16 vlan_proto; + int max_tx_rate; +}; + +struct nbl_service_mgt { + struct nbl_common_info *common; + struct nbl_dispatch_ops_tbl *disp_ops_tbl; + struct nbl_channel_ops_tbl *chan_ops_tbl; + struct nbl_serv_ring_mgt ring_mgt; + struct nbl_serv_flow_mgt flow_mgt; + struct nbl_serv_net_resource_mgt *net_resource_mgt; + +}; + +struct nbl_serv_notify_vlan_param { + u16 vlan_tci; + u16 vlan_proto; +}; + +int nbl_serv_netdev_open(struct net_device *netdev); +int nbl_serv_netdev_stop(struct net_device *netdev); +int nbl_serv_vsi_open(void *priv, struct net_device *netdev, u16 vsi_index, + u16 real_qps, bool use_napi); +int nbl_serv_vsi_stop(void *priv, u16 vsi_index); +void nbl_serv_cpu_affinity_init(void *priv, u16 rings_num); +u16 nbl_serv_get_vf_function_id(void *priv, int vf_id); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h new file mode 100644 index 000000000000..dc261fda3aa5 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_DEF_SERVICE_H_ +#define _NBL_DEF_SERVICE_H_ + +#include "nbl_include.h" + +struct nbl_service_ops { +}; + +struct nbl_service_ops_tbl { + struct nbl_resource_pt_ops pt_ops; + struct nbl_service_ops *ops; + void *priv; +}; + +int nbl_serv_init(void *priv, struct nbl_init_param *param); +void nbl_serv_remove(void *priv); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index 173ff2ebef81..af2439efb5db 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -155,6 +155,11 @@ struct nbl_queue_cfg_param { u16 half_offload_en; }; +struct nbl_msix_info_param { + u16 msix_num; + struct msix_entry *msix_entries; +}; + struct nbl_queue_stats { u64 packets; u64 bytes; @@ -232,6 +237,15 @@ struct nbl_notify_param { u16 tail_ptr; }; +struct nbl_common_irq_num { + int mbx_irq_num; +}; + +struct nbl_ctrl_irq_num { + int adminq_irq_num; + int abnormal_irq_num; +}; + enum nbl_port_type { NBL_PORT_TYPE_UNKNOWN = 0, NBL_PORT_TYPE_FIBRE, @@ -429,6 +443,13 @@ enum nbl_performance_mode { NBL_QUIRKS_UVN_PREFETCH_ALIGN, }; +struct nbl_vsi_param { + u16 vsi_id; + u16 queue_offset; + u16 queue_num; + u8 index; +}; + struct nbl_ring_param { u16 tx_ring_num; u16 rx_ring_num; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c index fda55e97d743..c6b346e4ce47 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c @@ -80,7 +80,13 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev, ret = nbl_disp_init(adapter, param); if (ret) goto disp_init_fail; + + ret = nbl_serv_init(adapter, param); + if (ret) + goto serv_init_fail; return adapter; +serv_init_fail: + nbl_disp_remove(adapter); disp_init_fail: 
product_base_ops->res_remove(adapter); res_init_fail: @@ -99,6 +105,7 @@ void nbl_core_remove(struct nbl_adapter *adapter) dev = NBL_ADAP_TO_DEV(adapter); product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter); + nbl_serv_remove(adapter); nbl_disp_remove(adapter); product_base_ops->res_remove(adapter); product_base_ops->chan_remove(adapter); -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 net-next 13/15] net/nebula-matrix: add Dev init, remove operation
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
` (11 preceding siblings ...)
2026-01-09 10:01 ` [PATCH v2 net-next 12/15] net/nebula-matrix: add Service " illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 14/15] net/nebula-matrix: add Dev start, stop operation illusion.wang
` (2 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
vadim.fedorenko, lukas.bulwahn, edumazet, open list

Some important steps in dev init:
1. Init the common dev: set up the mailbox channel queue, alloc the mbx
task, alloc the reset task, register the mailbox chan task, register the
common irq, etc.
2. Init the ctrl dev: register the ctrl irq, init the chip, start_mgt_flow,
set the chan qinfo, set up the adminq channel queue, register the adminq
chan task, alloc some tasks, etc.
3. Init the net dev: build, register and set up the vsi, register the net
irq, etc.
Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com> --- .../net/ethernet/nebula-matrix/nbl/Makefile | 1 + .../net/ethernet/nebula-matrix/nbl/nbl_core.h | 18 + .../nebula-matrix/nbl/nbl_core/nbl_dev.c | 1428 +++++++++++++++++ .../nebula-matrix/nbl/nbl_core/nbl_dev.h | 250 +++ .../nebula-matrix/nbl/nbl_core/nbl_service.c | 1356 ++++++++++++++++ .../nebula-matrix/nbl/nbl_core/nbl_service.h | 4 +- .../nbl/nbl_include/nbl_def_common.h | 9 + .../nbl/nbl_include/nbl_def_dev.h | 26 + .../nbl/nbl_include/nbl_def_service.h | 72 + .../nbl/nbl_include/nbl_include.h | 52 + .../net/ethernet/nebula-matrix/nbl/nbl_main.c | 42 +- 11 files changed, 3256 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile index 8a02d5515e67..062ff1ffb964 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile +++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile @@ -19,6 +19,7 @@ nbl_core-objs += nbl_common/nbl_common.o \ nbl_hw/nbl_adminq.o \ nbl_core/nbl_dispatch.o \ nbl_core/nbl_service.o \ + nbl_core/nbl_dev.o \ nbl_main.o # Provide include files diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h index 19dce6782d57..685d9f1831be 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h @@ -14,6 +14,7 @@ #include "nbl_def_resource.h" #include "nbl_def_dispatch.h" #include "nbl_def_service.h" +#include "nbl_def_dev.h" #include "nbl_def_common.h" #define NBL_ADAP_TO_PDEV(adapter) ((adapter)->pdev) @@ -25,11 +26,13 @@ #define NBL_ADAP_TO_RES_MGT(adapter) ((adapter)->core.res_mgt) #define NBL_ADAP_TO_DISP_MGT(adapter) 
((adapter)->core.disp_mgt) #define NBL_ADAP_TO_SERV_MGT(adapter) ((adapter)->core.serv_mgt) +#define NBL_ADAP_TO_DEV_MGT(adapter) ((adapter)->core.dev_mgt) #define NBL_ADAP_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt) #define NBL_ADAP_TO_HW_OPS_TBL(adapter) ((adapter)->intf.hw_ops_tbl) #define NBL_ADAP_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl) #define NBL_ADAP_TO_DISP_OPS_TBL(adapter) ((adapter)->intf.dispatch_ops_tbl) #define NBL_ADAP_TO_SERV_OPS_TBL(adapter) ((adapter)->intf.service_ops_tbl) +#define NBL_ADAP_TO_DEV_OPS_TBL(adapter) ((adapter)->intf.dev_ops_tbl) #define NBL_ADAP_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl) #define NBL_ADAPTER_TO_RES_PT_OPS(adapter) \ @@ -70,11 +73,25 @@ enum { NBL_CAP_IS_OCP_BIT, }; +enum nbl_adapter_state { + NBL_DOWN, + NBL_RESETTING, + NBL_RESET_REQUESTED, + NBL_INITING, + NBL_INIT_FAILED, + NBL_RUNNING, + NBL_TESTING, + NBL_USER, + NBL_FATAL_ERR, + NBL_STATE_NBITS +}; + struct nbl_interface { struct nbl_hw_ops_tbl *hw_ops_tbl; struct nbl_resource_ops_tbl *resource_ops_tbl; struct nbl_dispatch_ops_tbl *dispatch_ops_tbl; struct nbl_service_ops_tbl *service_ops_tbl; + struct nbl_dev_ops_tbl *dev_ops_tbl; struct nbl_channel_ops_tbl *channel_ops_tbl; }; @@ -94,6 +111,7 @@ struct nbl_adapter { struct nbl_common_info common; struct nbl_product_base_ops *product_base_ops; struct nbl_init_param init_param; + DECLARE_BITMAP(state, NBL_STATE_NBITS); }; struct nbl_netdev_priv { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c new file mode 100644 index 000000000000..6b797d7ddbf8 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c @@ -0,0 +1,1428 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#include <linux/rtc.h> +#include <linux/etherdevice.h> +#include <linux/rtnetlink.h> +#include <linux/if_vlan.h> + +#include "nbl_dev.h" + +static struct nbl_dev_board_id_table board_id_table; +static struct nbl_dev_ops dev_ops; + +static void nbl_dev_handle_fatal_err(struct nbl_dev_mgt *dev_mgt); +/* ---------- Basic functions ---------- */ +static int nbl_dev_alloc_board_id(struct nbl_dev_board_id_table *index_table, + u32 board_key) +{ + int i = 0; + + for (i = 0; i < NBL_DEV_BOARD_ID_MAX; i++) { + if (index_table->entry[i].board_key == board_key) { + index_table->entry[i].refcount++; + return i; + } + } + + for (i = 0; i < NBL_DEV_BOARD_ID_MAX; i++) { + if (!index_table->entry[i].valid) { + index_table->entry[i].board_key = board_key; + index_table->entry[i].refcount++; + index_table->entry[i].valid = true; + return i; + } + } + + return -ENOSPC; +} + +static void nbl_dev_free_board_id(struct nbl_dev_board_id_table *index_table, + u32 board_key) +{ + int i = 0; + + for (i = 0; i < NBL_DEV_BOARD_ID_MAX; i++) { + if (index_table->entry[i].board_key == board_key && + index_table->entry[i].valid) { + index_table->entry[i].refcount--; + break; + } + } + + if (i != NBL_DEV_BOARD_ID_MAX && !index_table->entry[i].refcount) + memset(&index_table->entry[i], 0, + sizeof(index_table->entry[i])); +} + +/* ---------- Interrupt config ---------- */ +static void nbl_dev_handle_abnormal_event(struct work_struct *work) +{ + struct nbl_task_info *task_info = container_of(work, + struct nbl_task_info, + clean_abnormal_irq_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->process_abnormal_event(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +} + +static void nbl_dev_register_common_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct 
nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_irq_num irq_num = { 0 }; + + serv_ops->get_common_irq_num(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + &irq_num); + msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num = irq_num.mbx_irq_num; +} + +static void nbl_dev_register_net_irq(struct nbl_dev_mgt *dev_mgt, u16 queue_num) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + + msix_info->serv_info[NBL_MSIX_NET_TYPE].num = queue_num; + msix_info->serv_info[NBL_MSIX_NET_TYPE].hw_self_mask_en = 1; +} + +static void nbl_dev_register_ctrl_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_ctrl_irq_num irq_num = {0}; + + serv_ops->get_ctrl_irq_num(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), &irq_num); + + msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].num = + irq_num.abnormal_irq_num; + msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].num = + irq_num.adminq_irq_num; +} + +/* ---------- Channel config ---------- */ +static int nbl_dev_setup_chan_qinfo(struct nbl_dev_mgt *dev_mgt, u8 chan_type) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt); + int ret = 0; + + if (!chan_ops->check_queue_exist(priv, chan_type)) + return 0; + + ret = chan_ops->cfg_chan_qinfo_map_table(priv, chan_type); + if (ret) + dev_err(dev, "setup chan:%d, qinfo map table failed\n", + chan_type); + + return ret; +} + +static int nbl_dev_setup_chan_queue(struct nbl_dev_mgt *dev_mgt, u8 chan_type) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt); + int ret = 0; + + if 
(chan_ops->check_queue_exist(priv, chan_type)) + ret = chan_ops->setup_queue(priv, chan_type); + + return ret; +} + +static int nbl_dev_remove_chan_queue(struct nbl_dev_mgt *dev_mgt, u8 chan_type) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt); + int ret = 0; + + if (chan_ops->check_queue_exist(priv, chan_type)) + ret = chan_ops->teardown_queue(priv, chan_type); + + return ret; +} + +static void nbl_dev_remove_chan_keepalive(struct nbl_dev_mgt *dev_mgt, + u8 chan_type) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + if (chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + chan_type)) + chan_ops->remove_keepalive(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + chan_type); +} + +static void nbl_dev_register_chan_task(struct nbl_dev_mgt *dev_mgt, + u8 chan_type, struct work_struct *task) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + if (chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + chan_type)) + chan_ops->register_chan_task(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + chan_type, task); +} + +/* ---------- Tasks config ---------- */ +static void nbl_dev_clean_mailbox_task(struct work_struct *work) +{ + struct nbl_dev_common *common_dev = + container_of(work, struct nbl_dev_common, clean_mbx_task); + struct nbl_dev_mgt *dev_mgt = common_dev->dev_mgt; + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + chan_ops->clean_queue_subtask(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_TYPE_MAILBOX); +} + +static void nbl_dev_prepare_reset_task(struct work_struct *work) +{ + struct nbl_reset_task_info *task_info = + container_of(work, struct nbl_reset_task_info, task); + struct nbl_dev_common *common_dev = + container_of(task_info, struct nbl_dev_common, reset_task); + struct nbl_dev_mgt *dev_mgt = common_dev->dev_mgt; + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_service_ops 
*serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_chan_send_info chan_send; + + serv_ops->netdev_stop(dev_mgt->net_dev->netdev); + netif_device_detach(dev_mgt->net_dev->netdev); + nbl_dev_remove_chan_keepalive(dev_mgt, NBL_CHAN_TYPE_MAILBOX); + + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_ACK_RESET_EVENT, NULL, 0, NULL, 0, 0); + /* notify the ctrl dev that reset event processing has finished */ + chan_ops->send_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), &chan_send); + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_ABNORMAL, NBL_CHAN_TYPE_MAILBOX, + true); + + /* sleep briefly in case send_msg is still running */ + usleep_range(10, 20); + + /* the ctrl dev must shut down PHY register read/write only after it + * has notified the EMP to shut down the device + */ + if (!NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt)) + serv_ops->set_hw_status(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_HW_FATAL_ERR); +} + +static void nbl_dev_clean_adminq_task(struct work_struct *work) +{ + struct nbl_task_info *task_info = + container_of(work, struct nbl_task_info, clean_adminq_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + chan_ops->clean_queue_subtask(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_TYPE_ADMINQ); +} + +static void nbl_dev_fw_heartbeat_task(struct work_struct *work) +{ + struct nbl_task_info *task_info = + container_of(work, struct nbl_task_info, fw_hb_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + + if (task_info->fw_resetting) + return; + + if (!serv_ops->check_fw_heartbeat(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt))) { + dev_notice(NBL_COMMON_TO_DEV(common), "FW reset detected\n"); + task_info->fw_resetting = true; + 
chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_ABNORMAL, + NBL_CHAN_TYPE_ADMINQ, true); + nbl_common_q_dwork(&task_info->fw_reset_task, + MSEC_PER_SEC, true); + } +} + +static void nbl_dev_fw_reset_task(struct work_struct *work) +{ +} + +static void nbl_dev_adapt_desc_gother_task(struct work_struct *work) +{ + struct nbl_task_info *task_info = container_of(work, + struct nbl_task_info, + adapt_desc_gother_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->adapt_desc_gother(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +} + +static void nbl_dev_recovery_abnormal_task(struct work_struct *work) +{ + struct nbl_task_info *task_info = container_of(work, + struct nbl_task_info, + recovery_abnormal_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->recovery_abnormal(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +} + +static void nbl_dev_ctrl_reset_task(struct work_struct *work) +{ + struct nbl_task_info *task_info = + container_of(work, struct nbl_task_info, reset_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + + nbl_dev_handle_fatal_err(dev_mgt); +} + +static void nbl_dev_ctrl_task_schedule(struct nbl_task_info *task_info) +{ + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_FW_HB_CAP)) + nbl_common_queue_work(&task_info->fw_hb_task, true); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_ADAPT_DESC_GOTHER)) + nbl_common_queue_work(&task_info->adapt_desc_gother_task, true); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_RECOVERY_ABN_STATUS)) + nbl_common_queue_work(&task_info->recovery_abnormal_task, true); +} + +static void 
nbl_dev_ctrl_task_timer(struct timer_list *t) +{ + struct nbl_task_info *task_info = + container_of(t, struct nbl_task_info, serv_timer); + + mod_timer(&task_info->serv_timer, + round_jiffies(task_info->serv_timer_period + jiffies)); + nbl_dev_ctrl_task_schedule(task_info); +} + +static void nbl_dev_chan_notify_flr_resp(void *priv, u16 src_id, u16 msg_id, + void *data, u32 data_len) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + u16 vfid; + + vfid = *(u16 *)data; + serv_ops->process_flr(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vfid); +} + +static void nbl_dev_ctrl_register_flr_chan_msg(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + if (!serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_PROCESS_FLR_CAP)) + return; + + chan_ops->register_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_MSG_ADMINQ_FLR_NOTIFY, + nbl_dev_chan_notify_flr_resp, dev_mgt); +} + +static struct nbl_dev_temp_alarm_info temp_alarm_info[NBL_TEMP_STATUS_MAX] = { + { LOGLEVEL_WARNING, "High temperature on sensors0 resumed.\n" }, + { LOGLEVEL_WARNING, + "High temperature on sensors0 observed, security(WARNING).\n" }, + { LOGLEVEL_CRIT, + "High temperature on sensors0 observed, security(CRITICAL).\n" }, + { LOGLEVEL_EMERG, + "High temperature on sensors0 observed, security(EMERGENCY).\n" }, +}; + +static void nbl_dev_handle_temp_ext(struct nbl_dev_mgt *dev_mgt, u8 *data, + u16 data_len) +{ + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + enum nbl_dev_temp_status old_temp_status = ctrl_dev->temp_status; + enum nbl_dev_temp_status new_temp_status = NBL_TEMP_STATUS_NORMAL; + u16 temp = (u16)*data; + u64 uptime = 0; + + /* no resume if temp exceeds NBL_TEMP_EMERG_THRESHOLD, even if the 
temp + * returns to normal, because the hw has already shut down. + */ + if (old_temp_status == NBL_TEMP_STATUS_EMERG) + return; + + /* if temp is in (85, 105) and the status is not normal, do not resume, + * to avoid alarm oscillation + */ + if (temp > NBL_TEMP_NOMAL_THRESHOLD && + temp < NBL_TEMP_WARNING_THRESHOLD && + old_temp_status > NBL_TEMP_STATUS_NORMAL) + return; + + if (temp >= NBL_TEMP_WARNING_THRESHOLD && + temp < NBL_TEMP_CRIT_THRESHOLD) + new_temp_status = NBL_TEMP_STATUS_WARNING; + else if (temp >= NBL_TEMP_CRIT_THRESHOLD && + temp < NBL_TEMP_EMERG_THRESHOLD) + new_temp_status = NBL_TEMP_STATUS_CRIT; + else if (temp >= NBL_TEMP_EMERG_THRESHOLD) + new_temp_status = NBL_TEMP_STATUS_EMERG; + + if (new_temp_status == old_temp_status) + return; + + ctrl_dev->temp_status = new_temp_status; + + /* on a temperature drop, only alarm when the alarm needs to be resumed */ + if (new_temp_status < old_temp_status && + new_temp_status != NBL_TEMP_STATUS_NORMAL) + return; + + if (data_len > sizeof(u16)) + uptime = *(u64 *)(data + sizeof(u16)); + nbl_log(common, temp_alarm_info[new_temp_status].logvel, "[%llu] %s", + uptime, temp_alarm_info[new_temp_status].alarm_info); + + if (new_temp_status == NBL_TEMP_STATUS_EMERG) { + ctrl_dev->task_info.reset_event = NBL_HW_FATAL_ERR_EVENT; + nbl_common_queue_work(&ctrl_dev->task_info.reset_task, false); + } +} + +static const char *nbl_log_level_name(int level) +{ + switch (level) { + case NBL_EMP_ALERT_LOG_FATAL: + return "FATAL"; + case NBL_EMP_ALERT_LOG_ERROR: + return "ERROR"; + case NBL_EMP_ALERT_LOG_WARNING: + return "WARNING"; + case NBL_EMP_ALERT_LOG_INFO: + return "INFO"; + default: + return "UNKNOWN"; + } +} + +static void nbl_dev_handle_emp_log_ext(struct nbl_dev_mgt *dev_mgt, u8 *data, + u16 data_len) +{ + struct nbl_emp_alert_log_event *log_event = + (struct nbl_emp_alert_log_event *)data; + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + + nbl_log(common, LOGLEVEL_INFO, "[FW][%llu] <%s> %.*s", + log_event->uptime, 
nbl_log_level_name(log_event->level), + data_len - sizeof(u64) - sizeof(u8), log_event->data); +} + +static void nbl_dev_chan_notify_evt_alert_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv; + struct nbl_chan_param_emp_alert_event *alert_param = + (struct nbl_chan_param_emp_alert_event *)data; + + switch (alert_param->type) { + case NBL_EMP_EVENT_TEMP_ALERT: + nbl_dev_handle_temp_ext(dev_mgt, alert_param->data, + alert_param->len); + return; + case NBL_EMP_EVENT_LOG_ALERT: + nbl_dev_handle_emp_log_ext(dev_mgt, alert_param->data, + alert_param->len); + return; + default: + return; + } +} + +static void +nbl_dev_ctrl_register_emp_ext_alert_chan_msg(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + if (!chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_TYPE_MAILBOX)) + return; + + chan_ops->register_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_MSG_ADMINQ_EXT_ALERT, + nbl_dev_chan_notify_evt_alert_resp, dev_mgt); +} + +static int nbl_dev_setup_ctrl_dev_task(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + struct nbl_task_info *task_info = NBL_DEV_CTRL_TO_TASK_INFO(ctrl_dev); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + task_info->dev_mgt = dev_mgt; + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_FW_HB_CAP)) { + nbl_common_alloc_task(&task_info->fw_hb_task, + nbl_dev_fw_heartbeat_task); + task_info->timer_setup = true; + } + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_FW_RESET_CAP)) { + nbl_common_alloc_delayed_task(&task_info->fw_reset_task, + nbl_dev_fw_reset_task); + task_info->timer_setup = true; + } + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_CLEAN_ADMINDQ_CAP)) { + 
nbl_common_alloc_task(&task_info->clean_adminq_task, + nbl_dev_clean_adminq_task); + task_info->timer_setup = true; + } + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_ADAPT_DESC_GOTHER)) { + nbl_common_alloc_task(&task_info->adapt_desc_gother_task, + nbl_dev_adapt_desc_gother_task); + task_info->timer_setup = true; + } + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_RECOVERY_ABN_STATUS)) { + nbl_common_alloc_task(&task_info->recovery_abnormal_task, + nbl_dev_recovery_abnormal_task); + task_info->timer_setup = true; + } + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_RESET_CTRL_CAP)) + nbl_common_alloc_task(&task_info->reset_task, + &nbl_dev_ctrl_reset_task); + + nbl_common_alloc_task(&task_info->clean_abnormal_irq_task, + nbl_dev_handle_abnormal_event); + + if (task_info->timer_setup) { + timer_setup(&task_info->serv_timer, nbl_dev_ctrl_task_timer, 0); + task_info->serv_timer_period = HZ; + } + + nbl_dev_register_chan_task(dev_mgt, NBL_CHAN_TYPE_ADMINQ, + &task_info->clean_adminq_task); + + return 0; +} + +static void nbl_dev_remove_ctrl_dev_task(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_task_info *task_info = NBL_DEV_CTRL_TO_TASK_INFO(ctrl_dev); + + nbl_dev_register_chan_task(dev_mgt, NBL_CHAN_TYPE_ADMINQ, NULL); + + nbl_common_release_task(&task_info->clean_abnormal_irq_task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_FW_RESET_CAP)) + nbl_common_release_delayed_task(&task_info->fw_reset_task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_FW_HB_CAP)) + nbl_common_release_task(&task_info->fw_hb_task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_CLEAN_ADMINDQ_CAP)) + 
nbl_common_release_task(&task_info->clean_adminq_task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_ADAPT_DESC_GOTHER)) + nbl_common_release_task(&task_info->adapt_desc_gother_task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_RECOVERY_ABN_STATUS)) + nbl_common_release_task(&task_info->recovery_abnormal_task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_RESET_CTRL_CAP)) + nbl_common_release_task(&task_info->reset_task); +} + +static int nbl_dev_update_template_config(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt); + + return serv_ops->update_template_config(priv); +} + +/* ---------- Dev init process ---------- */ +static int nbl_dev_setup_common_dev(struct nbl_adapter *adapter, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_common *common_dev; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt); + int board_id; + + common_dev = devm_kzalloc(NBL_ADAP_TO_DEV(adapter), + sizeof(struct nbl_dev_common), GFP_KERNEL); + if (!common_dev) + return -ENOMEM; + common_dev->dev_mgt = dev_mgt; + + if (nbl_dev_setup_chan_queue(dev_mgt, NBL_CHAN_TYPE_MAILBOX)) + goto setup_chan_fail; + + if (serv_ops->get_product_fix_cap(priv, + NBL_TASK_CLEAN_MAILBOX_CAP)) + nbl_common_alloc_task(&common_dev->clean_mbx_task, + nbl_dev_clean_mailbox_task); + + if (serv_ops->get_product_fix_cap(priv, + NBL_TASK_RESET_CAP)) + nbl_common_alloc_task(&common_dev->reset_task.task, + &nbl_dev_prepare_reset_task); + + if (param->caps.is_nic) { + board_id = serv_ops->get_board_id(priv); + if (board_id < 0) + goto get_board_id_fail; + common->board_id = board_id; + } + 
+ common->vsi_id = serv_ops->get_vsi_id(priv, 0, + NBL_VSI_DATA); + + serv_ops->get_eth_id(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_COMMON_TO_VSI_ID(common), + &NBL_COMMON_TO_ETH_MODE(common), + &NBL_COMMON_TO_ETH_ID(common), + &NBL_COMMON_TO_LOGIC_ETH_ID(common)); + + nbl_dev_register_chan_task(dev_mgt, NBL_CHAN_TYPE_MAILBOX, + &common_dev->clean_mbx_task); + + dev_mgt->common_dev = common_dev; + + nbl_dev_register_common_irq(dev_mgt); + + return 0; + +get_board_id_fail: + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_RESET_CAP)) + nbl_common_release_task(&common_dev->reset_task.task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_CLEAN_MAILBOX_CAP)) + nbl_common_release_task(&common_dev->clean_mbx_task); +setup_chan_fail: + devm_kfree(NBL_ADAP_TO_DEV(adapter), common_dev); + return -EFAULT; +} + +static void nbl_dev_remove_common_dev(struct nbl_adapter *adapter) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_common *common_dev = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + + if (!common_dev) + return; + + nbl_dev_register_chan_task(dev_mgt, NBL_CHAN_TYPE_MAILBOX, NULL); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_RESET_CAP)) + nbl_common_release_task(&common_dev->reset_task.task); + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_CLEAN_MAILBOX_CAP)) + nbl_common_release_task(&common_dev->clean_mbx_task); + + nbl_dev_remove_chan_queue(dev_mgt, NBL_CHAN_TYPE_MAILBOX); + + devm_kfree(NBL_ADAP_TO_DEV(adapter), common_dev); + dev_mgt->common_dev = NULL; +} + +static int nbl_dev_setup_ctrl_dev(struct nbl_adapter *adapter, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = 
NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct device *dev = NBL_ADAP_TO_DEV(adapter); + struct nbl_dev_ctrl *ctrl_dev; + char part_number[50] = ""; + char serial_number[128] = ""; + int i, ret = 0; + u32 board_key; + + int board_id; + + board_key = pci_domain_nr(dev_mgt->common->pdev->bus) << 16 | + dev_mgt->common->pdev->bus->number; + if (param->caps.is_nic) { + board_id = nbl_dev_alloc_board_id(&board_id_table, board_key); + if (board_id < 0) + return -ENOSPC; + NBL_COMMON_TO_BOARD_ID(common) = board_id; + } + + dev_info(dev, "board_key 0x%x alloc board id 0x%x\n", board_key, + NBL_COMMON_TO_BOARD_ID(common)); + + ctrl_dev = devm_kzalloc(dev, sizeof(struct nbl_dev_ctrl), GFP_KERNEL); + if (!ctrl_dev) + goto alloc_fail; + NBL_DEV_CTRL_TO_TASK_INFO(ctrl_dev)->adapter = adapter; + NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt) = ctrl_dev; + + nbl_dev_register_ctrl_irq(dev_mgt); + + ctrl_dev->ctrl_dev_wq1 = + create_singlethread_workqueue("nbl_ctrldev_wq1"); + if (!ctrl_dev->ctrl_dev_wq1) { + dev_err(dev, "Failed to create workqueue nbl_ctrldev_wq1\n"); + goto alloc_wq_fail; + } + + ret = serv_ops->init_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + if (ret) { + dev_err(dev, "ctrl dev chip_init failed\n"); + goto chip_init_fail; + } + + ret = serv_ops->start_mgt_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + if (ret) { + dev_err(dev, "ctrl dev start_mgt_flow failed\n"); + goto mgt_flow_fail; + } + + for (i = 0; i < NBL_CHAN_TYPE_MAX; i++) { + ret = nbl_dev_setup_chan_qinfo(dev_mgt, i); + if (ret) { + dev_err(dev, "ctrl dev setup chan qinfo failed\n"); + goto setup_chan_q_fail; + } + } + + nbl_dev_ctrl_register_flr_chan_msg(dev_mgt); + nbl_dev_ctrl_register_emp_ext_alert_chan_msg(dev_mgt); + + ret = nbl_dev_setup_chan_queue(dev_mgt, NBL_CHAN_TYPE_ADMINQ); + if (ret) { + dev_err(dev, "ctrl dev setup chan queue failed\n"); + goto setup_chan_q_fail; + } + + ret = nbl_dev_setup_ctrl_dev_task(dev_mgt); + if (ret) { + dev_err(dev, 
"ctrl dev task failed\n"); + goto setup_ctrl_dev_task_fail; + } + + serv_ops->get_part_number(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + part_number); + serv_ops->get_serial_number(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + serial_number); + dev_info(dev, "part number: %s, serial number: %s\n", part_number, + serial_number); + + nbl_dev_update_template_config(dev_mgt); + + return 0; + +setup_ctrl_dev_task_fail: + nbl_dev_remove_chan_queue(dev_mgt, NBL_CHAN_TYPE_ADMINQ); +setup_chan_q_fail: + serv_ops->stop_mgt_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +mgt_flow_fail: + serv_ops->destroy_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +chip_init_fail: + destroy_workqueue(ctrl_dev->ctrl_dev_wq1); +alloc_wq_fail: + devm_kfree(dev, ctrl_dev); + NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt) = NULL; +alloc_fail: + nbl_dev_free_board_id(&board_id_table, board_key); + return ret; +} + +static void nbl_dev_remove_ctrl_dev(struct nbl_adapter *adapter) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_ctrl **ctrl_dev = &NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + u32 board_key; + + if (!*ctrl_dev) + return; + + board_key = pci_domain_nr(dev_mgt->common->pdev->bus) << 16 | + dev_mgt->common->pdev->bus->number; + nbl_dev_remove_chan_queue(dev_mgt, NBL_CHAN_TYPE_ADMINQ); + nbl_dev_remove_ctrl_dev_task(dev_mgt); + + serv_ops->stop_mgt_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + serv_ops->destroy_chip(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + + destroy_workqueue((*ctrl_dev)->ctrl_dev_wq1); + devm_kfree(NBL_ADAP_TO_DEV(adapter), *ctrl_dev); + *ctrl_dev = NULL; + + /* If it is not a nic, this free function does nothing, + * so there is no need to check + */ + nbl_dev_free_board_id(&board_id_table, board_key); +} + +static int nbl_dev_netdev_open(struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct 
nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->netdev_open(netdev); +} + +static int nbl_dev_netdev_stop(struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->netdev_stop(netdev); +} + +static netdev_tx_t nbl_dev_start_xmit(struct sk_buff *skb, + struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_resource_pt_ops *pt_ops = NBL_DEV_MGT_TO_RES_PT_OPS(dev_mgt); + + return pt_ops->start_xmit(skb, netdev); +} + +static void nbl_dev_netdev_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->get_stats64(netdev, stats); +} + +static int nbl_dev_netdev_rx_add_vid(struct net_device *netdev, __be16 proto, + u16 vid) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->rx_add_vid(netdev, proto, vid); +} + +static int nbl_dev_netdev_rx_kill_vid(struct net_device *netdev, __be16 proto, + u16 vid) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->rx_kill_vid(netdev, proto, vid); +} + +static const struct net_device_ops netdev_ops_leonis_pf = { + .ndo_open = nbl_dev_netdev_open, + .ndo_stop = nbl_dev_netdev_stop, + .ndo_start_xmit = nbl_dev_start_xmit, + .ndo_validate_addr = 
eth_validate_addr, + .ndo_get_stats64 = nbl_dev_netdev_get_stats64, + .ndo_vlan_rx_add_vid = nbl_dev_netdev_rx_add_vid, + .ndo_vlan_rx_kill_vid = nbl_dev_netdev_rx_kill_vid, +}; + +static const struct net_device_ops netdev_ops_leonis_vf = { + .ndo_open = nbl_dev_netdev_open, + .ndo_stop = nbl_dev_netdev_stop, + .ndo_start_xmit = nbl_dev_start_xmit, + .ndo_validate_addr = eth_validate_addr, + .ndo_get_stats64 = nbl_dev_netdev_get_stats64, + .ndo_vlan_rx_add_vid = nbl_dev_netdev_rx_add_vid, + .ndo_vlan_rx_kill_vid = nbl_dev_netdev_rx_kill_vid, +}; + +static int nbl_dev_setup_netops_leonis(void *priv, struct net_device *netdev, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + bool is_vf = param->caps.is_vf; + + if (is_vf) { + netdev->netdev_ops = &netdev_ops_leonis_vf; + } else { + netdev->netdev_ops = &netdev_ops_leonis_pf; + serv_ops->set_netdev_ops(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + &netdev_ops_leonis_pf, true); + } + return 0; +} + +static int nbl_dev_register_net(struct nbl_dev_mgt *dev_mgt, + struct nbl_register_net_result *register_result) +{ + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct pci_dev *pdev = + NBL_COMMON_TO_PDEV(NBL_DEV_MGT_TO_COMMON(dev_mgt)); + struct nbl_register_net_param register_param = {0}; +#ifdef CONFIG_PCI_IOV + struct resource *res; +#endif + u16 pf_bdf; + u64 pf_bar_start; + u64 vf_bar_start, vf_bar_size; + u16 total_vfs = 0, offset, stride; + int pos; + u32 val; + int ret; + + pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &val); + pf_bar_start = (u64)(val & PCI_BASE_ADDRESS_MEM_MASK); + pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0 + 4, &val); + pf_bar_start |= ((u64)val << 32); + + register_param.pf_bar_start = pf_bar_start; + + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV); + if (pos) { + pf_bdf = 
PCI_DEVID(pdev->bus->number, pdev->devfn); + + pci_read_config_word(pdev, pos + PCI_SRIOV_VF_OFFSET, &offset); + pci_read_config_word(pdev, pos + PCI_SRIOV_VF_STRIDE, &stride); + pci_read_config_word(pdev, pos + PCI_SRIOV_TOTAL_VF, + &total_vfs); + + pci_read_config_dword(pdev, pos + PCI_SRIOV_BAR, &val); + vf_bar_start = (u64)(val & PCI_BASE_ADDRESS_MEM_MASK); + pci_read_config_dword(pdev, pos + PCI_SRIOV_BAR + 4, &val); + vf_bar_start |= ((u64)val << 32); + +#ifdef CONFIG_PCI_IOV + res = &pdev->resource[PCI_IOV_RESOURCES]; + vf_bar_size = resource_size(res); +#else + vf_bar_size = 0; +#endif + if (total_vfs) { + register_param.pf_bdf = pf_bdf; + register_param.vf_bar_start = vf_bar_start; + register_param.vf_bar_size = vf_bar_size; + register_param.total_vfs = total_vfs; + register_param.offset = offset; + register_param.stride = stride; + } + } + + net_dev->total_vfs = total_vfs; + + ret = serv_ops->register_net(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + &register_param, register_result); + + if (!register_result->tx_queue_num || !register_result->rx_queue_num) + return -EIO; + + return ret; +} + +static void nbl_dev_unregister_net(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + int ret; + + ret = serv_ops->unregister_net(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + if (ret) + dev_err(dev, "unregister net failed\n"); +} + +static u16 nbl_dev_vsi_alloc_queue(struct nbl_dev_net *net_dev, u16 queue_num) +{ + struct nbl_dev_vsi_controller *vsi_ctrl = &net_dev->vsi_ctrl; + u16 queue_offset = 0; + + if (vsi_ctrl->queue_free_offset + queue_num > net_dev->total_queue_num) + return -ENOSPC; + + queue_offset = vsi_ctrl->queue_free_offset; + vsi_ctrl->queue_free_offset += queue_num; + + return queue_offset; +} + +static int nbl_dev_vsi_common_setup(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + struct nbl_dev_vsi *vsi) +{ + struct nbl_service_ops *serv_ops = 
NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_vsi_param vsi_param = { 0 }; + int ret; + + vsi->queue_offset = nbl_dev_vsi_alloc_queue(net_dev, + vsi->queue_num); + vsi_param.index = vsi->index; + vsi_param.vsi_id = vsi->vsi_id; + vsi_param.queue_offset = vsi->queue_offset; + vsi_param.queue_num = vsi->queue_num; + + /* Tell serv & res layer the mapping from vsi to queue_id */ + ret = serv_ops->register_vsi_info(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + &vsi_param); + return ret; +} + +static void nbl_dev_vsi_common_remove(struct nbl_dev_mgt *dev_mgt, + struct nbl_dev_vsi *vsi) +{ +} + +static int nbl_dev_vsi_data_register(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + void *vsi_data) +{ + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + int ret; + + ret = nbl_dev_register_net(dev_mgt, &vsi->register_result); + if (ret) + return ret; + + vsi->queue_num = vsi->register_result.tx_queue_num; + vsi->queue_size = vsi->register_result.queue_size; + + nbl_debug(common, "Data vsi register, queue_num %d, queue_size %d", + vsi->queue_num, vsi->queue_size); + + return 0; +} + +static int nbl_dev_vsi_data_setup(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + return nbl_dev_vsi_common_setup(dev_mgt, param, vsi); +} + +static void nbl_dev_vsi_data_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + nbl_dev_vsi_common_remove(dev_mgt, vsi); +} + +static int nbl_dev_vsi_ctrl_register(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + void *vsi_data) +{ + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + 
serv_ops->get_rep_queue_info(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + &vsi->queue_num, &vsi->queue_size); + + nbl_debug(common, "Ctrl vsi register, queue_num %d, queue_size %d", + vsi->queue_num, vsi->queue_size); + return 0; +} + +static int nbl_dev_vsi_ctrl_setup(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + return nbl_dev_vsi_common_setup(dev_mgt, param, vsi); +} + +static void nbl_dev_vsi_ctrl_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + nbl_dev_vsi_common_remove(dev_mgt, vsi); +} + +static struct nbl_dev_vsi_tbl vsi_tbl[NBL_VSI_MAX] = { + [NBL_VSI_DATA] = { + .vsi_ops = { + .register_vsi = nbl_dev_vsi_data_register, + .setup = nbl_dev_vsi_data_setup, + .remove = nbl_dev_vsi_data_remove, + }, + .vf_support = true, + .only_nic_support = false, + .in_kernel = true, + .use_independ_irq = true, + .static_queue = true, + }, + [NBL_VSI_CTRL] = { + .vsi_ops = { + .register_vsi = nbl_dev_vsi_ctrl_register, + .setup = nbl_dev_vsi_ctrl_setup, + .remove = nbl_dev_vsi_ctrl_remove, + }, + .vf_support = false, + .only_nic_support = true, + .in_kernel = true, + .use_independ_irq = true, + .static_queue = true, + }, +}; + +static int nbl_dev_vsi_build(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param) +{ + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt); + struct nbl_dev_vsi *vsi = NULL; + int i; + + net_dev->vsi_ctrl.queue_num = 0; + net_dev->vsi_ctrl.queue_free_offset = 0; + + /* Build all vsi, and alloc vsi_id for each of them */ + for (i = 0; i < NBL_VSI_MAX; i++) { + if ((param->caps.is_vf && !vsi_tbl[i].vf_support) || + (!param->caps.is_nic && vsi_tbl[i].only_nic_support)) + continue; + + vsi = devm_kzalloc(NBL_DEV_MGT_TO_DEV(dev_mgt), sizeof(*vsi), + GFP_KERNEL); 
+ if (!vsi) + goto malloc_vsi_fail; + + vsi->ops = &vsi_tbl[i].vsi_ops; + vsi->vsi_id = serv_ops->get_vsi_id(priv, 0, i); + vsi->index = i; + vsi->in_kernel = vsi_tbl[i].in_kernel; + vsi->use_independ_irq = vsi_tbl[i].use_independ_irq; + vsi->static_queue = vsi_tbl[i].static_queue; + net_dev->vsi_ctrl.vsi_list[i] = vsi; + } + + return 0; + +malloc_vsi_fail: + while (--i + 1) { + devm_kfree(NBL_DEV_MGT_TO_DEV(dev_mgt), + net_dev->vsi_ctrl.vsi_list[i]); + net_dev->vsi_ctrl.vsi_list[i] = NULL; + } + + return -ENOMEM; +} + +static void nbl_dev_vsi_destroy(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + int i; + + for (i = 0; i < NBL_VSI_MAX; i++) + if (net_dev->vsi_ctrl.vsi_list[i]) { + devm_kfree(NBL_DEV_MGT_TO_DEV(dev_mgt), + net_dev->vsi_ctrl.vsi_list[i]); + net_dev->vsi_ctrl.vsi_list[i] = NULL; + } +} + +struct nbl_dev_vsi *nbl_dev_vsi_select(struct nbl_dev_mgt *dev_mgt, + u8 vsi_index) +{ + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_dev_vsi *vsi = NULL; + int i = 0; + + for (i = 0; i < NBL_VSI_MAX; i++) { + vsi = net_dev->vsi_ctrl.vsi_list[i]; + if (vsi && vsi->index == vsi_index) + return vsi; + } + + return NULL; +} + +static struct nbl_dev_net_ops netdev_ops[NBL_PRODUCT_MAX] = { + { + .setup_netdev_ops = nbl_dev_setup_netops_leonis, + }, +}; + +static void nbl_det_setup_net_dev_ops(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param) +{ + NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt) = &netdev_ops[param->product_type]; +} + +static int nbl_dev_setup_net_dev(struct nbl_adapter *adapter, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_net **net_dev = &NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct device *dev = NBL_ADAP_TO_DEV(adapter); + struct nbl_dev_vsi *vsi; + u16 total_queue_num = 0, kernel_queue_num = 0; + u16 dynamic_queue_max = 0, irq_queue_num = 0; + int i, ret; + + *net_dev = devm_kzalloc(dev, sizeof(struct 
nbl_dev_net), GFP_KERNEL); + if (!*net_dev) + return -ENOMEM; + + ret = nbl_dev_vsi_build(dev_mgt, param); + if (ret) + goto vsi_build_fail; + + for (i = 0; i < NBL_VSI_MAX; i++) { + vsi = (*net_dev)->vsi_ctrl.vsi_list[i]; + + if (!vsi) + continue; + + ret = vsi->ops->register_vsi(dev_mgt, param, vsi); + if (ret) { + dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt), + "Vsi %d register failed", vsi->index); + goto vsi_register_fail; + } + + if (vsi->static_queue) { + total_queue_num += vsi->queue_num; + } else { + if (dynamic_queue_max < vsi->queue_num) + dynamic_queue_max = vsi->queue_num; + } + + if (vsi->use_independ_irq) + irq_queue_num += vsi->queue_num; + + if (vsi->in_kernel) + kernel_queue_num += vsi->queue_num; + } + + /* Only one vsi can use its dynamic queues at a time, so reserve the maximum. */ + total_queue_num += dynamic_queue_max; + + /* The total queue count must be set before vsi setup. */ + (*net_dev)->total_queue_num = total_queue_num; + (*net_dev)->kernel_queue_num = kernel_queue_num; + + for (i = 0; i < NBL_VSI_MAX; i++) { + vsi = (*net_dev)->vsi_ctrl.vsi_list[i]; + + if (!vsi) + continue; + + if (!vsi->in_kernel) + continue; + + ret = vsi->ops->setup(dev_mgt, param, vsi); + if (ret) { + dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt), + "Vsi %d setup failed", vsi->index); + goto vsi_setup_fail; + } + } + + nbl_dev_register_net_irq(dev_mgt, irq_queue_num); + + nbl_det_setup_net_dev_ops(dev_mgt, param); + + return 0; + +vsi_setup_fail: +vsi_register_fail: + nbl_dev_vsi_destroy(dev_mgt); +vsi_build_fail: + devm_kfree(dev, *net_dev); + return ret; +} + +static void nbl_dev_remove_net_dev(struct nbl_adapter *adapter) +{ + struct device *dev = NBL_ADAP_TO_DEV(adapter); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_net **net_dev = &NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_dev_vsi *vsi; + int i; + + if (!*net_dev) + return; + + for (i = 0; i < NBL_VSI_MAX; i++) { + vsi = (*net_dev)->vsi_ctrl.vsi_list[i]; + + if (!vsi) + continue; + +
vsi->ops->remove(dev_mgt, vsi); + } + nbl_dev_vsi_destroy(dev_mgt); + + nbl_dev_unregister_net(dev_mgt); + + devm_kfree(dev, *net_dev); + *net_dev = NULL; +} + +static int nbl_dev_setup_dev_mgt(struct nbl_common_info *common, + struct nbl_dev_mgt **dev_mgt) +{ + *dev_mgt = devm_kzalloc(NBL_COMMON_TO_DEV(common), + sizeof(struct nbl_dev_mgt), GFP_KERNEL); + if (!*dev_mgt) + return -ENOMEM; + + (*dev_mgt)->common = common; + return 0; +} + +static void nbl_dev_remove_dev_mgt(struct nbl_common_info *common, + struct nbl_dev_mgt **dev_mgt) +{ + devm_kfree(NBL_COMMON_TO_DEV(common), *dev_mgt); + *dev_mgt = NULL; +} + +static void nbl_dev_remove_ops(struct device *dev, + struct nbl_dev_ops_tbl **dev_ops_tbl) +{ + devm_kfree(dev, *dev_ops_tbl); + *dev_ops_tbl = NULL; +} + +static int nbl_dev_setup_ops(struct device *dev, + struct nbl_dev_ops_tbl **dev_ops_tbl, + struct nbl_adapter *adapter) +{ + *dev_ops_tbl = + devm_kzalloc(dev, sizeof(struct nbl_dev_ops_tbl), GFP_KERNEL); + if (!*dev_ops_tbl) + return -ENOMEM; + + (*dev_ops_tbl)->ops = &dev_ops; + (*dev_ops_tbl)->priv = adapter; + + return 0; +} + +int nbl_dev_init(void *p, struct nbl_init_param *param) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct device *dev = NBL_ADAP_TO_DEV(adapter); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_dev_mgt **dev_mgt = + (struct nbl_dev_mgt **)&NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_ops_tbl **dev_ops_tbl = + &NBL_ADAP_TO_DEV_OPS_TBL(adapter); + struct nbl_service_ops_tbl *serv_ops_tbl = + NBL_ADAP_TO_SERV_OPS_TBL(adapter); + struct nbl_channel_ops_tbl *chan_ops_tbl = + NBL_ADAP_TO_CHAN_OPS_TBL(adapter); + int ret; + + ret = nbl_dev_setup_dev_mgt(common, dev_mgt); + if (ret) + goto setup_mgt_fail; + + (*dev_mgt)->serv_ops_tbl = serv_ops_tbl; + (*dev_mgt)->chan_ops_tbl = chan_ops_tbl; + + ret = nbl_dev_setup_common_dev(adapter, param); + if (ret) + goto setup_common_dev_fail; + + if (param->caps.has_ctrl) { + ret = 
nbl_dev_setup_ctrl_dev(adapter, param); + if (ret) + goto setup_ctrl_dev_fail; + } + + ret = nbl_dev_setup_net_dev(adapter, param); + if (ret) + goto setup_net_dev_fail; + + ret = nbl_dev_setup_ops(dev, dev_ops_tbl, adapter); + if (ret) + goto setup_ops_fail; + + return 0; + +setup_ops_fail: + nbl_dev_remove_net_dev(adapter); +setup_net_dev_fail: + nbl_dev_remove_ctrl_dev(adapter); +setup_ctrl_dev_fail: + nbl_dev_remove_common_dev(adapter); +setup_common_dev_fail: + nbl_dev_remove_dev_mgt(common, dev_mgt); +setup_mgt_fail: + return ret; +} + +void nbl_dev_remove(void *p) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct device *dev = NBL_ADAP_TO_DEV(adapter); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_dev_mgt **dev_mgt = + (struct nbl_dev_mgt **)&NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_ops_tbl **dev_ops_tbl = + &NBL_ADAP_TO_DEV_OPS_TBL(adapter); + + nbl_dev_remove_ops(dev, dev_ops_tbl); + nbl_dev_remove_net_dev(adapter); + nbl_dev_remove_ctrl_dev(adapter); + nbl_dev_remove_common_dev(adapter); + + nbl_dev_remove_dev_mgt(common, dev_mgt); +} + +static void nbl_dev_handle_fatal_err(struct nbl_dev_mgt *dev_mgt) +{ +} diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h new file mode 100644 index 000000000000..3b1cf6eea915 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h @@ -0,0 +1,250 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author: + */ + +#ifndef _NBL_DEV_H_ +#define _NBL_DEV_H_ + +#include "nbl_core.h" + +#define NBL_DEV_MGT_TO_COMMON(dev_mgt) ((dev_mgt)->common) +#define NBL_DEV_MGT_TO_DEV(dev_mgt) \ + NBL_COMMON_TO_DEV(NBL_DEV_MGT_TO_COMMON(dev_mgt)) +#define NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt) ((dev_mgt)->common_dev) +#define NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt) ((dev_mgt)->ctrl_dev) +#define NBL_DEV_MGT_TO_NET_DEV(dev_mgt) ((dev_mgt)->net_dev) +#define NBL_DEV_COMMON_TO_MSIX_INFO(dev_common) (&(dev_common)->msix_info) +#define NBL_DEV_CTRL_TO_TASK_INFO(dev_ctrl) (&(dev_ctrl)->task_info) +#define NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt) ((dev_mgt)->net_dev->ops) + +#define NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt) ((dev_mgt)->serv_ops_tbl) +#define NBL_DEV_MGT_TO_SERV_OPS(dev_mgt) \ + (NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt)->ops) +#define NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt) \ + (NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt)->priv) +#define NBL_DEV_MGT_TO_RES_PT_OPS(dev_mgt) \ + (&(NBL_DEV_MGT_TO_SERV_OPS_TBL(dev_mgt)->pt_ops)) +#define NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt) ((dev_mgt)->chan_ops_tbl) +#define NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt) \ + (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->ops) +#define NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt) \ + (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->priv) + +#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | \ + NETIF_MSG_LINK | NETIF_MSG_IFDOWN | \ + NETIF_MSG_IFUP) + +#define NBL_STRING_NAME_LEN 32 +#define NBL_DEFAULT_MTU 1500 + +#define NBL_DEV_BATCH_RESET_FUNC_NUM 32 +#define NBL_DEV_BATCH_RESET_USEC 1000000 + +#define NBL_DEV_FW_RESET_WAIT_TIME 3500 + +enum nbl_reset_status { + NBL_RESET_INIT, + NBL_RESET_SEND, + NBL_RESET_DONE, + NBL_RESET_STATUS_MAX +}; + +struct nbl_task_info { + struct nbl_adapter *adapter; + struct nbl_dev_mgt *dev_mgt; + struct work_struct fw_hb_task; + struct delayed_work fw_reset_task; + struct work_struct clean_adminq_task; + struct work_struct adapt_desc_gother_task; + struct work_struct clean_abnormal_irq_task; + struct work_struct 
recovery_abnormal_task; + struct work_struct report_temp_task; + struct work_struct report_reboot_task; + struct work_struct reset_task; + enum nbl_reset_event reset_event; + enum nbl_reset_status reset_status[NBL_MAX_FUNC]; + struct timer_list serv_timer; + unsigned long serv_timer_period; + + bool fw_resetting; + bool timer_setup; +}; + +struct nbl_reset_task_info { + struct work_struct task; + enum nbl_reset_event event; +}; + +enum nbl_msix_serv_type { + /* virtio_dev has a config vector_id, and its vector_id needs to be 0 */ + NBL_MSIX_VIRTIO_TYPE = 0, + NBL_MSIX_NET_TYPE, + NBL_MSIX_MAILBOX_TYPE, + NBL_MSIX_ABNORMAL_TYPE, + NBL_MSIX_ADMINDQ_TYPE, + NBL_MSIX_RDMA_TYPE, + NBL_MSIX_TYPE_MAX + +}; + +struct nbl_msix_serv_info { + char irq_name[NBL_STRING_NAME_LEN]; + u16 num; + u16 base_vector_id; + /* true: hw reports msix, and hw needs to mask it actively */ + bool hw_self_mask_en; +}; + +struct nbl_msix_info { + struct nbl_msix_serv_info serv_info[NBL_MSIX_TYPE_MAX]; + struct msix_entry *msix_entries; +}; + +struct nbl_dev_common { + struct nbl_dev_mgt *dev_mgt; + struct device *hwmon_dev; + struct nbl_msix_info msix_info; + char mailbox_name[NBL_STRING_NAME_LEN]; + // for ctrl-dev/net-dev mailbox recv msg + struct work_struct clean_mbx_task; + + struct nbl_reset_task_info reset_task; +}; + +enum nbl_dev_temp_status { + NBL_TEMP_STATUS_NORMAL = 0, + NBL_TEMP_STATUS_WARNING, + NBL_TEMP_STATUS_CRIT, + NBL_TEMP_STATUS_EMERG, + NBL_TEMP_STATUS_MAX +}; + +enum nbl_emp_log_level { + NBL_EMP_ALERT_LOG_FATAL = 0, + NBL_EMP_ALERT_LOG_ERROR = 1, + NBL_EMP_ALERT_LOG_WARNING = 2, + NBL_EMP_ALERT_LOG_INFO = 3, +}; + +struct nbl_dev_ctrl { + struct nbl_task_info task_info; + enum nbl_dev_temp_status temp_status; + struct workqueue_struct *ctrl_dev_wq1; +}; + +enum nbl_dev_emp_alert_event { + NBL_EMP_EVENT_TEMP_ALERT = 1, + NBL_EMP_EVENT_LOG_ALERT = 2, + NBL_EMP_EVENT_MAX +}; + +enum nbl_dev_temp_threshold { + NBL_TEMP_NOMAL_THRESHOLD = 85, + NBL_TEMP_WARNING_THRESHOLD = 105, +
NBL_TEMP_CRIT_THRESHOLD = 115, + NBL_TEMP_EMERG_THRESHOLD = 120, +}; + +struct nbl_dev_temp_alarm_info { + int logvel; +#define NBL_TEMP_ALARM_STR_LEN 128 + char alarm_info[NBL_TEMP_ALARM_STR_LEN]; +}; + +struct nbl_dev_vsi_controller { + u16 queue_num; + u16 queue_free_offset; + void *vsi_list[NBL_VSI_MAX]; +}; + +struct nbl_dev_net_ops { + int (*setup_netdev_ops)(void *priv, struct net_device *netdev, + struct nbl_init_param *param); +}; + +struct nbl_dev_attr_info { + struct nbl_netdev_name_attr dev_name_attr; +}; + +struct nbl_dev_net { + struct net_device *netdev; + struct nbl_dev_attr_info dev_attr; + struct nbl_dev_net_ops *ops; + u8 eth_id; + struct nbl_dev_vsi_controller vsi_ctrl; + u16 total_queue_num; + u16 kernel_queue_num; + u16 total_vfs; +}; + +struct nbl_dev_mgt { + struct nbl_common_info *common; + struct nbl_service_ops_tbl *serv_ops_tbl; + struct nbl_channel_ops_tbl *chan_ops_tbl; + struct nbl_dev_common *common_dev; + struct nbl_dev_ctrl *ctrl_dev; + struct nbl_dev_net *net_dev; +}; + +struct nbl_dev_vsi_feature { + u16 has_lldp:1; + u16 has_lacp:1; + u16 rsv:14; +}; + +struct nbl_dev_vsi_ops { + int (*register_vsi)(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, void *vsi_data); + int (*setup)(struct nbl_dev_mgt *dev_mgt, struct nbl_init_param *param, + void *vsi_data); + void (*remove)(struct nbl_dev_mgt *dev_mgt, void *vsi_data); + int (*start)(void *dev_priv, struct net_device *netdev, void *vsi_data); + void (*stop)(void *dev_priv, void *vsi_data); + int (*netdev_build)(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + struct net_device *netdev, void *vsi_data); + void (*netdev_destroy)(struct nbl_dev_mgt *dev_mgt, void *vsi_data); +}; + +struct nbl_dev_vsi { + struct nbl_dev_vsi_ops *ops; + struct net_device *netdev; + struct net_device *napi_netdev; + struct nbl_register_net_result register_result; + struct nbl_dev_vsi_feature feature; + u16 vsi_id; + u16 queue_offset; + u16 queue_num; + u16 queue_size; + u16 
in_kernel; + u8 index; + bool enable; + bool use_independ_irq; + bool static_queue; +}; + +struct nbl_dev_vsi_tbl { + struct nbl_dev_vsi_ops vsi_ops; + bool vf_support; + bool only_nic_support; + u16 in_kernel; + bool use_independ_irq; + bool static_queue; +}; + +#define NBL_DEV_BOARD_ID_MAX NBL_DRIVER_DEV_MAX +struct nbl_dev_board_id_entry { + u32 board_key; /* domain << 16 | bus_id */ + u8 refcount; + bool valid; +}; + +struct nbl_dev_board_id_table { + struct nbl_dev_board_id_entry entry[NBL_DEV_BOARD_ID_MAX]; +}; + +struct nbl_dev_vsi *nbl_dev_vsi_select(struct nbl_dev_mgt *dev_mgt, + u8 vsi_index); +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c index c4ce5da65d8f..76a2a1513e2f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c @@ -12,6 +12,994 @@ #include <linux/if_vlan.h> #include "nbl_service.h" + +static void nbl_serv_set_link_state(struct nbl_service_mgt *serv_mgt, + struct net_device *netdev); + +static void nbl_serv_set_queue_param(struct nbl_serv_ring *ring, u16 desc_num, + struct nbl_txrx_queue_param *param, + u16 vsi_id, u16 global_vec_id) +{ + param->vsi_id = vsi_id; + param->dma = ring->dma; + param->desc_num = desc_num; + param->local_queue_id = ring->local_queue_id / 2; + param->global_vec_id = global_vec_id; + param->intr_en = 1; + param->intr_mask = 1; + param->extend_header = 1; + param->rxcsum = 1; + param->split = 0; +} + +/* + * In virtio mode, the emulator triggers the configuration of + * txrx_registers only based on tx_ring, so the rx_info needs + * to be delivered first before the tx_info can be delivered. 
+ */ +static int nbl_serv_setup_queues(struct nbl_service_mgt *serv_mgt, + struct nbl_serv_ring_vsi_info *vsi_info) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_txrx_queue_param param = {0}; + struct nbl_serv_ring *ring; + struct nbl_serv_vector *vector; + u16 start = vsi_info->ring_offset, + end = vsi_info->ring_offset + vsi_info->ring_num; + int i, ret = 0; + + for (i = start; i < end; i++) { + vector = &ring_mgt->vectors[i]; + ring = &ring_mgt->rx_rings[i]; + nbl_serv_set_queue_param(ring, ring_mgt->rx_desc_num, &param, + vsi_info->vsi_id, + vector->global_vec_id); + + ret = disp_ops->setup_queue(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + &param, false); + if (ret) + return ret; + } + + for (i = start; i < end; i++) { + vector = &ring_mgt->vectors[i]; + ring = &ring_mgt->tx_rings[i]; + nbl_serv_set_queue_param(ring, ring_mgt->tx_desc_num, &param, + vsi_info->vsi_id, + vector->global_vec_id); + + ret = disp_ops->setup_queue(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + &param, true); + if (ret) + return ret; + } + + return 0; +} + +static void nbl_serv_flush_rx_queues(struct nbl_service_mgt *serv_mgt, + u16 ring_offset, u16 ring_num) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + int i; + + for (i = ring_offset; i < ring_offset + ring_num; i++) + disp_ops->kick_rx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i); +} + +static int nbl_serv_setup_rings(struct nbl_service_mgt *serv_mgt, + struct net_device *netdev, + struct nbl_serv_ring_vsi_info *vsi_info, + bool use_napi) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u16 start = vsi_info->ring_offset, + end = vsi_info->ring_offset + vsi_info->ring_num; + int i, ret = 0; + + for (i = start; i < end; i++) { + ring_mgt->tx_rings[i].dma = +
disp_ops->start_tx_ring(p, i); + if (!ring_mgt->tx_rings[i].dma) { + netdev_err(netdev, "Fail to start tx ring %d", i); + ret = -EFAULT; + break; + } + } + if (i != end) { + while (--i + 1 > start) + disp_ops->stop_tx_ring(p, i); + goto tx_err; + } + + for (i = start; i < end; i++) { + ring_mgt->rx_rings[i].dma = + disp_ops->start_rx_ring(p, i, use_napi); + if (!ring_mgt->rx_rings[i].dma) { + netdev_err(netdev, "Fail to start rx ring %d", i); + ret = -EFAULT; + break; + } + } + if (i != end) { + while (--i + 1 > start) + disp_ops->stop_rx_ring(p, i); + goto rx_err; + } + + return 0; + +rx_err: + for (i = start; i < end; i++) + disp_ops->stop_tx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i); +tx_err: + return ret; +} + +static void nbl_serv_stop_rings(struct nbl_service_mgt *serv_mgt, + struct nbl_serv_ring_vsi_info *vsi_info) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + u16 start = vsi_info->ring_offset, + end = vsi_info->ring_offset + vsi_info->ring_num; + int i; + + for (i = start; i < end; i++) + disp_ops->stop_tx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i); + + for (i = start; i < end; i++) + disp_ops->stop_rx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i); +} + +static void nbl_serv_check_flow_table_spec(struct nbl_service_mgt *serv_mgt) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + int ret; + + if (!flow_mgt->force_promisc) + return; + + ret = disp_ops->check_flow_table_spec(p, + flow_mgt->vlan_list_cnt, + flow_mgt->unicast_mac_cnt + 1, + flow_mgt->multi_mac_cnt); + + if (!ret) { + flow_mgt->force_promisc = 0; + flow_mgt->pending_async_work = 1; + } +} + +static struct nbl_serv_vlan_node *nbl_serv_alloc_vlan_node(void) +{ + struct nbl_serv_vlan_node *vlan_node = NULL; + + vlan_node = kzalloc(sizeof(*vlan_node), GFP_ATOMIC); + if (!vlan_node) + return NULL; + + 
INIT_LIST_HEAD(&vlan_node->node); + vlan_node->ref_cnt = 1; + vlan_node->primary_mac_effective = 0; + vlan_node->sub_mac_effective = 0; + + return vlan_node; +} + +static void nbl_serv_free_vlan_node(struct nbl_serv_vlan_node *vlan_node) +{ + kfree(vlan_node); +} + +static int +nbl_serv_update_vlan_node_effective(struct nbl_service_mgt *serv_mgt, + struct nbl_serv_vlan_node *vlan_node, + bool effective, u16 vsi) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct net_device *dev = net_resource_mgt->netdev; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_submac_node *submac_node; + void *priv = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + bool force_promisc = 0; + int ret = 0, i = 0; + + if (vlan_node->primary_mac_effective == effective && + vlan_node->sub_mac_effective == effective) + return 0; + + if (effective && !vlan_node->primary_mac_effective) { + ret = disp_ops->add_macvlan(priv, + flow_mgt->mac, vlan_node->vid, vsi); + if (ret) + goto check_ret; + } else if (!effective && vlan_node->primary_mac_effective) { + disp_ops->del_macvlan(priv, + flow_mgt->mac, vlan_node->vid, vsi); + } + + vlan_node->primary_mac_effective = effective; + + for (i = 0; i < NBL_SUBMAC_MAX; i++) + list_for_each_entry(submac_node, &flow_mgt->submac_list[i], + node) { + if (!submac_node->effective) + continue; + + if (effective && !vlan_node->sub_mac_effective) { + ret = disp_ops->add_macvlan(priv, + submac_node->mac, + vlan_node->vid, + vsi); + if (ret) + goto del_macvlan_node; + } else if (!effective && vlan_node->sub_mac_effective) { + disp_ops->del_macvlan(priv, + submac_node->mac, + vlan_node->vid, vsi); + } + } + + vlan_node->sub_mac_effective = effective; + + return 0; + +del_macvlan_node: + for (i = 0; i < NBL_SUBMAC_MAX; i++) + list_for_each_entry(submac_node, &flow_mgt->submac_list[i], + node) { + if 
(submac_node->effective) + disp_ops->del_macvlan(priv, + submac_node->mac, + vlan_node->vid, vsi); + } +check_ret: + if (ret) { + force_promisc = 1; + if (flow_mgt->force_promisc ^ force_promisc) { + flow_mgt->force_promisc = force_promisc; + flow_mgt->pending_async_work = 1; + netdev_info(dev, "Reached VLAN filter limit, forcing promisc/allmulti mode\n"); + } + } + + if (vlan_node->primary_mac_effective == effective) + return 0; + + if (!NBL_COMMON_TO_VF_CAP(NBL_SERV_MGT_TO_COMMON(serv_mgt))) + return 0; + + return ret; +} + +static void nbl_serv_set_sfp_state(void *priv, struct net_device *netdev, + u8 eth_id, bool open, bool is_force) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + int ret = 0; + + if (is_force) { + if (open) { + ret = disp_ops->set_sfp_state(p, + eth_id, + NBL_SFP_MODULE_ON); + if (ret) + netdev_info(netdev, "Fail to open sfp\n"); + else + netdev_info(netdev, "open sfp\n"); + } else { + ret = disp_ops->set_sfp_state(p, + eth_id, + NBL_SFP_MODULE_OFF); + if (ret) + netdev_info(netdev, "Fail to close sfp\n"); + else + netdev_info(netdev, "close sfp\n"); + } + } +} + +static void nbl_serv_set_netdev_carrier_state(void *priv, + struct net_device *netdev, + u8 link_state) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_resource_mgt = + serv_mgt->net_resource_mgt; + + if (test_bit(NBL_DOWN, adapter->state)) + return; + + switch (net_resource_mgt->link_forced) { + case IFLA_VF_LINK_STATE_AUTO: + if (link_state) { + if (!netif_carrier_ok(netdev)) { + netif_carrier_on(netdev); + netdev_info(netdev, "Set nic link up\n"); + } + } else { + if (netif_carrier_ok(netdev)) { + netif_carrier_off(netdev); + netdev_info(netdev, "Set nic link down\n"); + } + } + return; + case 
IFLA_VF_LINK_STATE_ENABLE: + netif_carrier_on(netdev); + return; + case IFLA_VF_LINK_STATE_DISABLE: + netif_carrier_off(netdev); + return; + default: + netif_carrier_on(netdev); + return; + } +} + +static void nbl_serv_set_link_state(struct nbl_service_mgt *serv_mgt, + struct net_device *netdev) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt = + serv_mgt->net_resource_mgt; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_eth_link_info eth_link_info = {0}; + void *priv = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u16 vsi_id = NBL_COMMON_TO_VSI_ID(common); + u8 eth_id = NBL_COMMON_TO_ETH_ID(common); + int ret = 0; + + net_resource_mgt->link_forced = + disp_ops->get_link_forced(priv, vsi_id); + + if (net_resource_mgt->link_forced == IFLA_VF_LINK_STATE_AUTO) { + ret = disp_ops->get_link_state(priv, + eth_id, &eth_link_info); + if (ret) { + netdev_err(netdev, "Fail to get_link_state err %d\n", + ret); + eth_link_info.link_status = 1; + } + } + + nbl_serv_set_netdev_carrier_state(serv_mgt, netdev, + eth_link_info.link_status); +} + +int nbl_serv_vsi_open(void *priv, struct net_device *netdev, u16 vsi_index, + u16 real_qps, bool use_napi) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info = + &ring_mgt->vsi_info[vsi_index]; + int ret = 0; + + if (vsi_info->started) + return 0; + + ret = nbl_serv_setup_rings(serv_mgt, netdev, vsi_info, use_napi); + if (ret) { + netdev_err(netdev, "Fail to setup rings\n"); + goto setup_rings_fail; + } + + ret = nbl_serv_setup_queues(serv_mgt, vsi_info); + if (ret) 
{ + netdev_err(netdev, "Fail to setup queues\n"); + goto setup_queue_fail; + } + nbl_serv_flush_rx_queues(serv_mgt, vsi_info->ring_offset, + vsi_info->ring_num); + + if (vsi_index == NBL_VSI_DATA) + disp_ops->cfg_txrx_vlan(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + net_resource_mgt->vlan_tci, + net_resource_mgt->vlan_proto, + vsi_index); + + ret = disp_ops->cfg_dsch(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_info->vsi_id, true); + if (ret) { + netdev_err(netdev, "Fail to setup dsch\n"); + goto setup_dsch_fail; + } + + vsi_info->active_ring_num = real_qps; + ret = disp_ops->setup_cqs(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_info->vsi_id, real_qps, false); + if (ret) + goto setup_cqs_fail; + + vsi_info->started = true; + return 0; + +setup_cqs_fail: + disp_ops->cfg_dsch(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common), false); +setup_dsch_fail: + disp_ops->remove_all_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common)); +setup_queue_fail: + nbl_serv_stop_rings(serv_mgt, vsi_info); +setup_rings_fail: + return ret; +} + +int nbl_serv_vsi_stop(void *priv, u16 vsi_index) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info = + &ring_mgt->vsi_info[vsi_index]; + + if (!vsi_info->started) + return 0; + + vsi_info->started = false; + /* modify default action and RSS configuration */ + disp_ops->remove_cqs(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_info->vsi_id); + + /* clear dsch config */ + disp_ops->cfg_dsch(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_info->vsi_id, false); + + /* disable and reset tx/rx logic queues */ + disp_ops->remove_all_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_info->vsi_id); + + /* free tx and rx bufs */ + nbl_serv_stop_rings(serv_mgt, vsi_info); + + return 0; +} + +static int 
nbl_serv_abnormal_event_to_queue(int event_type) +{ + switch (event_type) { + case NBL_ABNORMAL_EVENT_DVN: + return NBL_TX; + case NBL_ABNORMAL_EVENT_UVN: + return NBL_RX; + default: + return event_type; + } +} + +static int +nbl_serv_chan_stop_abnormal_sw_queue_req(struct nbl_service_mgt *serv_mgt, + u16 local_queue_id, u16 func_id, + int type) +{ + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + struct nbl_chan_param_stop_abnormal_sw_queue param = { 0 }; + struct nbl_chan_send_info chan_send = { 0 }; + int ret = 0; + + param.local_queue_id = local_queue_id; + param.type = type; + + NBL_CHAN_SEND(chan_send, func_id, NBL_CHAN_MSG_STOP_ABNORMAL_SW_QUEUE, + &param, sizeof(param), NULL, 0, 1); + ret = chan_ops->send_msg(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + &chan_send); + + return ret; +} + +static dma_addr_t +nbl_serv_chan_restore_netdev_queue_req(struct nbl_service_mgt *serv_mgt, + u16 local_queue_id, u16 func_id, + int type) +{ + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + struct nbl_chan_param_restore_queue param = { 0 }; + struct nbl_chan_send_info chan_send = { 0 }; + dma_addr_t dma = 0; + int ret = 0; + + param.local_queue_id = local_queue_id; + param.type = type; + + NBL_CHAN_SEND(chan_send, func_id, NBL_CHAN_MSG_RESTORE_NETDEV_QUEUE, + &param, sizeof(param), &dma, sizeof(dma), 1); + ret = chan_ops->send_msg(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + &chan_send); + if (ret) + return 0; + + return dma; +} + +static int +nbl_serv_chan_restart_netdev_queue_req(struct nbl_service_mgt *serv_mgt, + u16 local_queue_id, u16 func_id, + int type) +{ + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + struct nbl_chan_param_restart_queue param = { 0 }; + struct nbl_chan_send_info chan_send = { 0 }; + + param.local_queue_id = local_queue_id; + param.type = type; + + NBL_CHAN_SEND(chan_send, func_id, NBL_CHAN_MSG_RESTART_NETDEV_QUEUE, + &param, sizeof(param), NULL, 0, 1); + return 
chan_ops->send_msg(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + &chan_send); +} + +static int nbl_serv_start_abnormal_hw_queue(struct nbl_service_mgt *serv_mgt, + u16 vsi_id, u16 local_queue_id, + dma_addr_t dma, int type) +{ + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_txrx_queue_param param = { 0 }; + struct nbl_serv_vector *vector; + struct nbl_serv_ring *ring; + int ret = 0; + + switch (type) { + case NBL_TX: + vector = &ring_mgt->vectors[local_queue_id]; + ring = &ring_mgt->tx_rings[local_queue_id]; + ring->dma = dma; + nbl_serv_set_queue_param(ring, ring_mgt->tx_desc_num, &param, + vsi_id, vector->global_vec_id); + ret = disp_ops->setup_queue(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + &param, true); + return ret; + case NBL_RX: + vector = &ring_mgt->vectors[local_queue_id]; + ring = &ring_mgt->rx_rings[local_queue_id]; + ring->dma = dma; + + nbl_serv_set_queue_param(ring, ring_mgt->rx_desc_num, &param, + vsi_id, vector->global_vec_id); + ret = disp_ops->setup_queue(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + &param, false); + return ret; + default: + break; + } + + return -EINVAL; +} + +static void nbl_serv_restore_queue(struct nbl_service_mgt *serv_mgt, u16 vsi_id, + u16 local_queue_id, u16 type, bool dif_err) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + void *priv = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u16 global_queue_id; + u16 func_id; + dma_addr_t dma = 0; + int ret = 0; + + func_id = disp_ops->get_function_id(priv, vsi_id); + while (!rtnl_trylock()) + msleep(20); + + ret = nbl_serv_chan_stop_abnormal_sw_queue_req(serv_mgt, local_queue_id, + func_id, type); + if (ret) + goto unlock; + + ret = disp_ops->stop_abnormal_hw_queue(priv, vsi_id, + local_queue_id, type); + if (ret) + goto unlock; + + dma = nbl_serv_chan_restore_netdev_queue_req(serv_mgt, local_queue_id, + 
func_id, type); + if (!dma) + goto unlock; + + ret = nbl_serv_start_abnormal_hw_queue(serv_mgt, vsi_id, local_queue_id, + dma, type); + if (ret) + goto unlock; + + ret = nbl_serv_chan_restart_netdev_queue_req(serv_mgt, local_queue_id, + func_id, type); + if (ret) + goto unlock; + + if (dif_err && type == NBL_TX) { + global_queue_id = + disp_ops->get_vsi_global_queue_id(priv, + vsi_id, + local_queue_id); + nbl_info(common, + "dvn int_status:0, queue_id:%d\n", global_queue_id); + } + +unlock: + rtnl_unlock(); +} + +int nbl_serv_netdev_open(struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_serv_ring_vsi_info *vsi_info; + int num_cpus, real_qps, ret = 0; + + if (!test_bit(NBL_DOWN, adapter->state)) + return -EBUSY; + + netdev_dbg(netdev, "Nbl open\n"); + + netif_carrier_off(netdev); + nbl_serv_set_sfp_state(serv_mgt, netdev, NBL_COMMON_TO_ETH_ID(common), + true, false); + vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA]; + + if (vsi_info->active_ring_num) { + real_qps = vsi_info->active_ring_num; + } else { + num_cpus = num_online_cpus(); + real_qps = num_cpus > vsi_info->ring_num ? 
vsi_info->ring_num : + num_cpus; + } + + ret = nbl_serv_vsi_open(serv_mgt, netdev, NBL_VSI_DATA, real_qps, 1); + if (ret) + goto vsi_open_fail; + + ret = netif_set_real_num_tx_queues(netdev, real_qps); + if (ret) + goto setup_real_qps_fail; + ret = netif_set_real_num_rx_queues(netdev, real_qps); + if (ret) + goto setup_real_qps_fail; + + netif_tx_start_all_queues(netdev); + clear_bit(NBL_DOWN, adapter->state); + set_bit(NBL_RUNNING, adapter->state); + nbl_serv_set_link_state(serv_mgt, netdev); + + netdev_dbg(netdev, "Nbl open ok!\n"); + + return 0; + +setup_real_qps_fail: + nbl_serv_vsi_stop(serv_mgt, NBL_VSI_DATA); +vsi_open_fail: + return ret; +} + +int nbl_serv_netdev_stop(struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + + if (!test_bit(NBL_RUNNING, adapter->state)) + return -EBUSY; + + netdev_dbg(netdev, "Nbl stop\n"); + set_bit(NBL_DOWN, adapter->state); + clear_bit(NBL_RUNNING, adapter->state); + + nbl_serv_set_sfp_state(serv_mgt, netdev, NBL_COMMON_TO_ETH_ID(common), + false, false); + + netif_tx_stop_all_queues(netdev); + netif_carrier_off(netdev); + netif_tx_disable(netdev); + synchronize_net(); + nbl_serv_vsi_stop(serv_mgt, NBL_VSI_DATA); + netdev_dbg(netdev, "Nbl stop ok!\n"); + + return 0; +} + +static int nbl_serv_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(dev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_netdev_priv *priv = netdev_priv(dev); + struct nbl_serv_vlan_node *vlan_node; + bool effective = true; + int ret = 0; + + if (vid == NBL_DEFAULT_VLAN_ID) + return 0; + + if (flow_mgt->vid != 0) + effective = false; + + if 
(!flow_mgt->ucast_flow_en) + effective = false; + + if (!flow_mgt->trusted_en && + flow_mgt->vlan_list_cnt >= NBL_NO_TRUST_MAX_VLAN) + return -ENOSPC; + + netif_dbg(common, drv, dev, + "add mac-vlan dev for proto 0x%04x, vid %u.", + be16_to_cpu(proto), vid); + + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) { + netif_dbg(common, drv, dev, "add mac-vlan dev vid %u.", + vlan_node->vid); + if (vlan_node->vid == vid) { + vlan_node->ref_cnt++; + return 0; + } + } + + vlan_node = nbl_serv_alloc_vlan_node(); + if (!vlan_node) + return -ENOMEM; + + vlan_node->vid = vid; + ret = nbl_serv_update_vlan_node_effective(serv_mgt, vlan_node, + effective, priv->data_vsi); + if (ret) + goto add_macvlan_failed; + list_add(&vlan_node->node, &flow_mgt->vlan_list); + flow_mgt->vlan_list_cnt++; + + nbl_serv_check_flow_table_spec(serv_mgt); + + return 0; + +add_macvlan_failed: + nbl_serv_free_vlan_node(vlan_node); + return ret; +} + +static int nbl_serv_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(dev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_netdev_priv *priv = netdev_priv(dev); + u16 data_vsi = priv->data_vsi; + struct nbl_serv_vlan_node *vlan_node; + + if (vid == NBL_DEFAULT_VLAN_ID) + return 0; + + netif_dbg(common, drv, dev, + "del mac-vlan dev for proto 0x%04x, vid %u.", + be16_to_cpu(proto), vid); + + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) { + netif_dbg(common, drv, dev, "del mac-vlan dev vid %u.", + vlan_node->vid); + if (vlan_node->vid == vid) { + vlan_node->ref_cnt--; + if (!vlan_node->ref_cnt) { + nbl_serv_update_vlan_node_effective(serv_mgt, + vlan_node, + 0, + data_vsi); + list_del(&vlan_node->node); + flow_mgt->vlan_list_cnt--; + nbl_serv_free_vlan_node(vlan_node); + } + break; + } + } + + 
nbl_serv_check_flow_table_spec(serv_mgt); + + return 0; +} + +static void nbl_serv_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_stats net_stats = { 0 }; + + if (!stats) { + netdev_err(netdev, "get_link_stats64 stats is null\n"); + return; + } + + disp_ops->get_net_stats(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + &net_stats); + + stats->rx_packets = net_stats.rx_packets; + stats->tx_packets = net_stats.tx_packets; + stats->rx_bytes = net_stats.rx_bytes; + stats->tx_bytes = net_stats.tx_bytes; + stats->multicast = net_stats.rx_multicast_packets; + + stats->rx_errors = 0; + stats->tx_errors = 0; + stats->rx_length_errors = netdev->stats.rx_length_errors; + stats->rx_crc_errors = netdev->stats.rx_crc_errors; + stats->rx_frame_errors = netdev->stats.rx_frame_errors; + stats->rx_dropped = 0; + stats->tx_dropped = 0; +} + +static int +nbl_serv_register_net(void *priv, struct nbl_register_net_param *register_param, + struct nbl_register_net_result *register_result) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->register_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + register_param, register_result); +} + +static int nbl_serv_unregister_net(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + return disp_ops->unregister_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); +} + +static int nbl_serv_start_mgt_flow(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return 
disp_ops->setup_multi_group(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+}
+
+static void nbl_serv_stop_mgt_flow(void *priv)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt);
+
+	return disp_ops->remove_multi_group(p);
+}
+
+static u32 nbl_serv_get_tx_headroom(void *priv)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+	return disp_ops->get_tx_headroom(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+}
+
+/*
+ * This op gets the fixed product capability from the resource layer. The
+ * capability is fixed by product_type, so there is no need to query the
+ * ctrl device.
+ */
+static bool nbl_serv_get_product_fix_cap(void *priv,
+					 enum nbl_fix_cap_type cap_type)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt);
+
+	return disp_ops->get_product_fix_cap(p, cap_type);
+}
+
+static int nbl_serv_init_chip(void *priv)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops;
+	struct nbl_common_info *common;
+	struct device *dev;
+	int ret = 0;
+
+	common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	dev = NBL_COMMON_TO_DEV(common);
+
+	ret = disp_ops->init_chip_module(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+	if (ret) {
+		dev_err(dev, "init_chip_module failed\n");
+		goto module_init_fail;
+	}
+
+	ret = disp_ops->queue_init(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+	if (ret) {
+		dev_err(dev, "queue_init failed\n");
+		goto queue_init_fail;
+	}
+
+	ret = disp_ops->vsi_init(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+	if (ret) {
+		dev_err(dev, "vsi_init failed\n");
+		goto vsi_init_fail;
+	}
+
+	return 0;
+
+vsi_init_fail:
+queue_init_fail:
+module_init_fail:
+
return ret; +} + +static int nbl_serv_destroy_chip(void *p) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)p; + struct nbl_dispatch_ops *disp_ops; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + if (!disp_ops->get_product_fix_cap(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_NEED_DESTROY_CHIP)) + return 0; + + disp_ops->deinit_chip_module(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); + return 0; +} + +static u16 nbl_serv_get_vsi_id(void *priv, u16 func_id, u16 type) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_vsi_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + func_id, type); +} + +static void nbl_serv_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode, + u8 *eth_id, u8 *logic_eth_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_eth_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id, + eth_mode, eth_id, logic_eth_id); +} + +static void nbl_serv_get_rep_queue_info(void *priv, u16 *queue_num, + u16 *queue_size) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->get_rep_queue_info(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + queue_num, queue_size); +} + +static void nbl_serv_set_netdev_ops(void *priv, + const struct net_device_ops *net_device_ops, + bool is_pf) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct device *dev = NBL_SERV_MGT_TO_DEV(serv_mgt); + + dev_dbg(dev, "set netdev ops:%p is_pf:%d\n", net_device_ops, is_pf); + if (is_pf) + net_resource_mgt->netdev_ops.pf_netdev_ops = + (void *)net_device_ops; +} + static void nbl_serv_setup_flow_mgt(struct nbl_serv_flow_mgt *flow_mgt) { int 
i = 0; @@ -21,7 +1009,375 @@ static void nbl_serv_setup_flow_mgt(struct nbl_serv_flow_mgt *flow_mgt) INIT_LIST_HEAD(&flow_mgt->submac_list[i]); } +static u8 __iomem *nbl_serv_get_hw_addr(void *priv, size_t *size) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_hw_addr(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), size); +} + +static u16 nbl_serv_get_function_id(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_function_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id); +} + +static void nbl_serv_get_real_bdf(void *priv, u16 vsi_id, u8 *bus, u8 *dev, + u8 *function) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_real_bdf(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id, bus, dev, function); +} + +static bool nbl_serv_check_fw_heartbeat(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + struct nbl_dispatch_ops *disp_ops; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->check_fw_heartbeat(p); +} + +static bool nbl_serv_check_fw_reset(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + struct nbl_dispatch_ops *disp_ops; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->check_fw_reset(p); +} + +static void nbl_serv_get_common_irq_num(void *priv, + struct nbl_common_irq_num *irq_num) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + irq_num->mbx_irq_num = + 
disp_ops->get_mbx_irq_num(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); +} + +static void nbl_serv_get_ctrl_irq_num(void *priv, + struct nbl_ctrl_irq_num *irq_num) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + irq_num->adminq_irq_num = disp_ops->get_adminq_irq_num(p); + irq_num->abnormal_irq_num = + disp_ops->get_abnormal_irq_num(p); +} + +static int nbl_serv_get_port_attributes(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + int ret; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + ret = disp_ops->get_port_attributes(p); + if (ret) + return -EIO; + + return 0; +} + +static int nbl_serv_update_template_config(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + int ret; + + ret = disp_ops->update_ring_num(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); + if (ret) + return ret; + + return 0; +} + +static int nbl_serv_get_part_number(void *priv, char *part_number) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_part_number(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + part_number); +} + +static int nbl_serv_get_serial_number(void *priv, char *serial_number) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_serial_number(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + serial_number); +} + +static int nbl_serv_enable_port(void *priv, bool enable) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + int ret; + + disp_ops = 
NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + ret = disp_ops->enable_port(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + enable); + if (ret) + return -EIO; + + return 0; +} + +static int nbl_serv_set_eth_mac_addr(void *priv, u8 *mac, u8 eth_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + if (NBL_COMMON_TO_VF_CAP(common)) + return 0; + else + return disp_ops->set_eth_mac_addr(p, + mac, eth_id); +} + +static void nbl_serv_adapt_desc_gother(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + disp_ops->adapt_desc_gother(p); +} + +static void nbl_serv_process_flr(void *priv, u16 vfid) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->flr_clear_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vfid); + disp_ops->flr_clear_flows(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vfid); + disp_ops->flr_clear_interrupt(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vfid); + disp_ops->flr_clear_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vfid); +} + +static u16 nbl_serv_covert_vfid_to_vsi_id(void *priv, u16 vfid) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + return disp_ops->covert_vfid_to_vsi_id(p, vfid); +} + +static void nbl_serv_recovery_abnormal(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->unmask_all_interrupts(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); +} + 
+static void nbl_serv_keep_alive(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->keep_alive(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); +} + +static int nbl_serv_register_vsi_info(void *priv, + struct nbl_vsi_param *vsi_param) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + u16 vsi_index = vsi_param->index; + u32 num_cpus; + + ring_mgt->vsi_info[vsi_index].vsi_index = vsi_index; + ring_mgt->vsi_info[vsi_index].vsi_id = vsi_param->vsi_id; + ring_mgt->vsi_info[vsi_index].ring_offset = vsi_param->queue_offset; + ring_mgt->vsi_info[vsi_index].ring_num = vsi_param->queue_num; + + /* init active ring number before first open, guarantee fd direct + *config check success. + */ + num_cpus = num_online_cpus(); + ring_mgt->vsi_info[vsi_index].active_ring_num = + (u16)num_cpus > vsi_param->queue_num ? vsi_param->queue_num : + (u16)num_cpus; + + /* + * Clear cfgs, in case this function exited abnormaly last time. + * only for data vsi, vf in vm only support data vsi. + * DPDK user vsi can not leak resource. 
+ */ + if (vsi_index == NBL_VSI_DATA) + disp_ops->clear_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_param->vsi_id); + disp_ops->register_vsi_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_index, vsi_param->queue_offset, + vsi_param->queue_num); + + return disp_ops->register_vsi2q(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_index, vsi_param->vsi_id, + vsi_param->queue_offset, + vsi_param->queue_num); +} + +static int nbl_serv_get_board_id(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_board_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); +} + +static int nbl_serv_process_abnormal_event(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_abnormal_event_info abnomal_info; + struct nbl_abnormal_details *detail; + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u16 local_queue_id; + int type, i, ret = 0; + + memset(&abnomal_info, 0, sizeof(abnomal_info)); + + ret = disp_ops->process_abnormal_event(p, &abnomal_info); + if (!ret) + return ret; + + for (i = 0; i < NBL_ABNORMAL_EVENT_MAX; i++) { + detail = &abnomal_info.details[i]; + + if (!detail->abnormal) + continue; + + type = nbl_serv_abnormal_event_to_queue(i); + local_queue_id = disp_ops->get_local_queue_id(p, + detail->vsi_id, + detail->qid); + if (local_queue_id == U16_MAX) + return 0; + + nbl_serv_restore_queue(serv_mgt, detail->vsi_id, local_queue_id, + type, true); + } + + return 0; +} + +static void nbl_serv_set_hw_status(void *priv, enum nbl_hw_status hw_status) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->set_hw_status(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), hw_status); +} + +static void nbl_serv_get_active_func_bitmaps(void *priv, unsigned long 
*bitmap, + int max_func) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->get_active_func_bitmaps(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + bitmap, max_func); +} + +u16 nbl_serv_get_vf_function_id(void *priv, int vf_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + if (vf_id >= net_resource_mgt->total_vfs) + return U16_MAX; + + return disp_ops->get_vf_function_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common), + vf_id); +} + static struct nbl_service_ops serv_ops = { + .init_chip = nbl_serv_init_chip, + .destroy_chip = nbl_serv_destroy_chip, + + .get_common_irq_num = nbl_serv_get_common_irq_num, + .get_ctrl_irq_num = nbl_serv_get_ctrl_irq_num, + .get_port_attributes = nbl_serv_get_port_attributes, + .update_template_config = nbl_serv_update_template_config, + .get_part_number = nbl_serv_get_part_number, + .get_serial_number = nbl_serv_get_serial_number, + .enable_port = nbl_serv_enable_port, + .set_sfp_state = nbl_serv_set_sfp_state, + + .register_net = nbl_serv_register_net, + .unregister_net = nbl_serv_unregister_net, + + .register_vsi_info = nbl_serv_register_vsi_info, + + .start_mgt_flow = nbl_serv_start_mgt_flow, + .stop_mgt_flow = nbl_serv_stop_mgt_flow, + .get_tx_headroom = nbl_serv_get_tx_headroom, + .get_product_fix_cap = nbl_serv_get_product_fix_cap, + + .vsi_open = nbl_serv_vsi_open, + .vsi_stop = nbl_serv_vsi_stop, + /* For netdev ops */ + .netdev_open = nbl_serv_netdev_open, + .netdev_stop = nbl_serv_netdev_stop, + .rx_add_vid = nbl_serv_rx_add_vid, + .rx_kill_vid = nbl_serv_rx_kill_vid, + .get_stats64 = nbl_serv_get_stats64, + .get_rep_queue_info = 
nbl_serv_get_rep_queue_info, + + .set_netdev_ops = nbl_serv_set_netdev_ops, + + .get_vsi_id = nbl_serv_get_vsi_id, + .get_eth_id = nbl_serv_get_eth_id, + + .get_hw_addr = nbl_serv_get_hw_addr, + + .get_function_id = nbl_serv_get_function_id, + .get_real_bdf = nbl_serv_get_real_bdf, + .set_eth_mac_addr = nbl_serv_set_eth_mac_addr, + .process_abnormal_event = nbl_serv_process_abnormal_event, + .adapt_desc_gother = nbl_serv_adapt_desc_gother, + .process_flr = nbl_serv_process_flr, + .get_board_id = nbl_serv_get_board_id, + .covert_vfid_to_vsi_id = nbl_serv_covert_vfid_to_vsi_id, + .recovery_abnormal = nbl_serv_recovery_abnormal, + .keep_alive = nbl_serv_keep_alive, + + .check_fw_heartbeat = nbl_serv_check_fw_heartbeat, + .check_fw_reset = nbl_serv_check_fw_reset, + + .set_hw_status = nbl_serv_set_hw_status, + .get_active_func_bitmaps = nbl_serv_get_active_func_bitmaps, + .get_vf_function_id = nbl_serv_get_vf_function_id, }; /* Structure starts here, adding an op should not modify anything below */ diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h index 457eac6fb3a7..1357a7f7f26f 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h @@ -14,6 +14,9 @@ #define NBL_SERV_MGT_TO_COMMON(serv_mgt) ((serv_mgt)->common) #define NBL_SERV_MGT_TO_DEV(serv_mgt) \ NBL_COMMON_TO_DEV(NBL_SERV_MGT_TO_COMMON(serv_mgt)) +#define NBL_NET_RES_MGT_TO_NETDEV(net_res_mgt) ((net_res_mgt)->netdev) +#define NBL_SERV_MGT_TO_NETDEV(serv_mgt) \ + NBL_NET_RES_MGT_TO_NETDEV(NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt)) #define NBL_SERV_MGT_TO_RING_MGT(serv_mgt) (&(serv_mgt)->ring_mgt) #define NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt) (&(serv_mgt)->flow_mgt) #define NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) ((serv_mgt)->net_resource_mgt) @@ -195,7 +198,6 @@ struct nbl_service_mgt { struct nbl_serv_ring_mgt ring_mgt; struct nbl_serv_flow_mgt 
flow_mgt; struct nbl_serv_net_resource_mgt *net_resource_mgt; - }; struct nbl_serv_notify_vlan_param { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h index c52a17acc4f3..8fe47b66fdbd 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_common.h @@ -211,6 +211,15 @@ struct nbl_common_info { bool wol_ena; }; +struct nbl_netdev_name_attr { + struct attribute attr; + ssize_t (*show)(struct device *dev, struct nbl_netdev_name_attr *attr, + char *buf); + ssize_t (*store)(struct device *dev, struct nbl_netdev_name_attr *attr, + const char *buf, size_t len); + char net_dev_name[IFNAMSIZ]; +}; + struct nbl_hash_tbl_key { struct device *dev; u16 key_size; diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h new file mode 100644 index 000000000000..2d60be4610a4 --- /dev/null +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0*/ +/* + * Copyright (c) 2025 Nebula Matrix Limited. 
+ * Author: + */ + +#ifndef _NBL_DEF_DEV_H_ +#define _NBL_DEF_DEV_H_ + +#include "nbl_include.h" + +#define NBL_DEV_OPS_TBL_TO_OPS(dev_ops_tbl) ((dev_ops_tbl)->ops) +#define NBL_DEV_OPS_TBL_TO_PRIV(dev_ops_tbl) ((dev_ops_tbl)->priv) + +struct nbl_dev_ops { +}; + +struct nbl_dev_ops_tbl { + struct nbl_dev_ops *ops; + void *priv; +}; + +int nbl_dev_init(void *p, struct nbl_init_param *param); +void nbl_dev_remove(void *p); + +#endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h index dc261fda3aa5..6cab14b7cdfc 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h @@ -10,6 +10,78 @@ #include "nbl_include.h" struct nbl_service_ops { + int (*init_chip)(void *p); + int (*destroy_chip)(void *p); + void (*get_common_irq_num)(void *priv, + struct nbl_common_irq_num *irq_num); + void (*get_ctrl_irq_num)(void *priv, struct nbl_ctrl_irq_num *irq_num); + int (*get_port_attributes)(void *p); + int (*update_template_config)(void *priv); + int (*get_part_number)(void *priv, char *part_number); + int (*get_serial_number)(void *priv, char *serial_number); + int (*enable_port)(void *p, bool enable); + int (*vsi_open)(void *priv, struct net_device *netdev, u16 vsi_index, + u16 real_qps, bool use_napi); + int (*vsi_stop)(void *priv, u16 vsi_index); + int (*netdev_open)(struct net_device *netdev); + int (*netdev_stop)(struct net_device *netdev); + void (*get_stats64)(struct net_device *netdev, + struct rtnl_link_stats64 *stats); + void (*set_rx_mode)(struct net_device *dev); + void (*change_rx_flags)(struct net_device *dev, int flag); + int (*rx_add_vid)(struct net_device *dev, __be16 proto, u16 vid); + int (*rx_kill_vid)(struct net_device *dev, __be16 proto, u16 vid); + int (*set_features)(struct net_device *dev, netdev_features_t features); + netdev_features_t 
(*features_check)(struct sk_buff *skb, + struct net_device *dev, + netdev_features_t features); + int (*get_phys_port_name)(struct net_device *dev, char *name, + size_t len); + void (*tx_timeout)(struct net_device *netdev, u32 txqueue); + u16 (*select_queue)(struct net_device *netdev, struct sk_buff *skb, + struct net_device *sb_dev); + int (*register_net)(void *priv, + struct nbl_register_net_param *register_param, + struct nbl_register_net_result *register_result); + int (*unregister_net)(void *priv); + int (*register_vsi_info)(void *priv, struct nbl_vsi_param *vsi_param); + int (*start_mgt_flow)(void *priv); + void (*stop_mgt_flow)(void *priv); + u32 (*get_tx_headroom)(void *priv); + u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type); + void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id, + u8 *logic_eth_id); + void (*set_sfp_state)(void *priv, struct net_device *netdev, u8 eth_id, + bool open, bool is_force); + int (*get_board_id)(void *priv); + + void (*get_rep_queue_info)(void *priv, u16 *queue_num, u16 *queue_size); + void (*set_netdev_ops)(void *priv, + const struct net_device_ops *net_device_ops, + bool is_pf); + + u8 __iomem *(*get_hw_addr)(void *priv, size_t *size); + u16 (*get_function_id)(void *priv, u16 vsi_id); + void (*get_real_bdf)(void *priv, u16 vsi_id, u8 *bus, u8 *dev, + u8 *function); + int (*set_eth_mac_addr)(void *priv, u8 *mac, u8 eth_id); + int (*process_abnormal_event)(void *priv); + void (*adapt_desc_gother)(void *priv); + void (*process_flr)(void *priv, u16 vfid); + u16 (*covert_vfid_to_vsi_id)(void *priv, u16 vfid); + void (*recovery_abnormal)(void *priv); + void (*keep_alive)(void *priv); + + bool (*check_fw_heartbeat)(void *priv); + bool (*check_fw_reset)(void *priv); + + bool (*get_product_fix_cap)(void *priv, enum nbl_fix_cap_type cap_type); + void (*register_dev_name)(void *priv, u16 vsi_id, char *name); + void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status); + void (*get_active_func_bitmaps)(void 
*priv, unsigned long *bitmap, + int max_func); + + u16 (*get_vf_function_id)(void *priv, int vf_id); }; struct nbl_service_ops_tbl { diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h index af2439efb5db..38a9d47ab6ca 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h @@ -12,6 +12,8 @@ /* ------ Basic definitions ------- */ #define NBL_DRIVER_NAME "nbl_core" +#define NBL_DRIVER_DEV_MAX 24 + #define NBL_PAIR_ID_GET_TX(id) ((id) * 2 + 1) #define NBL_PAIR_ID_GET_RX(id) ((id) * 2) @@ -29,6 +31,9 @@ #define NBL_MAX_FUNC (520) #define NBL_MAX_MTU_NUM 15 + +#define SET_DEV_MIN_MTU(netdev, mtu) ((netdev)->min_mtu = (mtu)) +#define SET_DEV_MAX_MTU(netdev, mtu) ((netdev)->max_mtu = (mtu)) /* Used for macros to pass checkpatch */ #define NBL_NAME(x) x @@ -76,6 +81,12 @@ enum nbl_hw_status { NBL_HW_STATUS_MAX, }; +enum nbl_reset_event { + /* Most hw module is not work nomal exclude pcie/emp */ + NBL_HW_FATAL_ERR_EVENT, + NBL_HW_MAX_EVENT +}; + struct nbl_func_caps { u32 has_ctrl:1; u32 has_net:1; @@ -419,7 +430,48 @@ enum { NBL_FEATURES_COUNT }; +static const netdev_features_t nbl_netdev_features[] = { + [NBL_NETIF_F_SG_BIT] = NETIF_F_SG, + [NBL_NETIF_F_IP_CSUM_BIT] = NETIF_F_IP_CSUM, + [NBL_NETIF_F_IPV6_CSUM_BIT] = NETIF_F_IPV6_CSUM, + [NBL_NETIF_F_HIGHDMA_BIT] = NETIF_F_HIGHDMA, + [NBL_NETIF_F_HW_VLAN_CTAG_TX_BIT] = NETIF_F_HW_VLAN_CTAG_TX, + [NBL_NETIF_F_HW_VLAN_CTAG_RX_BIT] = NETIF_F_HW_VLAN_CTAG_RX, + [NBL_NETIF_F_HW_VLAN_CTAG_FILTER_BIT] = NETIF_F_HW_VLAN_CTAG_FILTER, + [NBL_NETIF_F_TSO_BIT] = NETIF_F_TSO, + [NBL_NETIF_F_GSO_ROBUST_BIT] = NETIF_F_GSO_ROBUST, + [NBL_NETIF_F_TSO6_BIT] = NETIF_F_TSO6, + [NBL_NETIF_F_GSO_GRE_BIT] = NETIF_F_GSO_GRE, + [NBL_NETIF_F_GSO_GRE_CSUM_BIT] = NETIF_F_GSO_GRE_CSUM, + [NBL_NETIF_F_GSO_UDP_TUNNEL_BIT] = NETIF_F_GSO_UDP_TUNNEL, + 
[NBL_NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT] = NETIF_F_GSO_UDP_TUNNEL_CSUM,
+	[NBL_NETIF_F_GSO_PARTIAL_BIT] = NETIF_F_GSO_PARTIAL,
+	[NBL_NETIF_F_GSO_UDP_L4_BIT] = NETIF_F_GSO_UDP_L4,
+	[NBL_NETIF_F_SCTP_CRC_BIT] = NETIF_F_SCTP_CRC,
+	[NBL_NETIF_F_NTUPLE_BIT] = NETIF_F_NTUPLE,
+	[NBL_NETIF_F_RXHASH_BIT] = NETIF_F_RXHASH,
+	[NBL_NETIF_F_RXCSUM_BIT] = NETIF_F_RXCSUM,
+	[NBL_NETIF_F_HW_VLAN_STAG_TX_BIT] = NETIF_F_HW_VLAN_STAG_TX,
+	[NBL_NETIF_F_HW_VLAN_STAG_RX_BIT] = NETIF_F_HW_VLAN_STAG_RX,
+	[NBL_NETIF_F_HW_VLAN_STAG_FILTER_BIT] = NETIF_F_HW_VLAN_STAG_FILTER,
+	[NBL_NETIF_F_HW_TC_BIT] = NETIF_F_HW_TC,
+};
+
 #define NBL_FEATURE(name) (1 << (NBL_##name##_BIT))
+#define NBL_FEATURE_TEST_BIT(val, loc) (((val) >> (loc)) & 0x1)
+
+static inline netdev_features_t nbl_features_to_netdev_features(u64 features)
+{
+	netdev_features_t netdev_features = 0;
+	int i = 0;
+
+	for (i = 0; i < NBL_FEATURES_COUNT; i++) {
+		if (NBL_FEATURE_TEST_BIT(features, i))
+			netdev_features += nbl_netdev_features[i];
+	}
+
+	return netdev_features;
+};
 
 enum nbl_abnormal_event_module {
 	NBL_ABNORMAL_EVENT_DVN = 0,
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
index c6b346e4ce47..6aca084d2b36 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -84,7 +84,14 @@ struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
 	ret = nbl_serv_init(adapter, param);
 	if (ret)
 		goto serv_init_fail;
+
+	ret = nbl_dev_init(adapter, param);
+	if (ret)
+		goto dev_init_fail;
 
 	return adapter;
+
+dev_init_fail:
+	nbl_serv_remove(adapter);
 serv_init_fail:
 	nbl_disp_remove(adapter);
 disp_init_fail:
@@ -105,6 +112,7 @@ void nbl_core_remove(struct nbl_adapter *adapter)
 	dev = NBL_ADAP_TO_DEV(adapter);
 	product_base_ops = NBL_ADAP_TO_RPDUCT_BASE_OPS(adapter);
 
+	nbl_dev_remove(adapter);
 	nbl_serv_remove(adapter);
 	nbl_disp_remove(adapter);
 	product_base_ops->res_remove(adapter);
@@ -291,7 +299,39 @@ static struct pci_driver nbl_driver = {
 	.remove = nbl_remove,
 };
 
-module_pci_driver(nbl_driver);
+static int __init nbl_module_init(void)
+{
+	int status;
+
+	status = nbl_common_create_wq();
+	if (status) {
+		pr_err("Failed to create wq, err = %d\n", status);
+		goto wq_create_failed;
+	}
+	status = pci_register_driver(&nbl_driver);
+	if (status) {
+		pr_err("Failed to register PCI driver, err = %d\n", status);
+		goto pci_register_driver_failed;
+	}
+	pr_info("nbl module loaded\n");
+	return 0;
+
+pci_register_driver_failed:
+	nbl_common_destroy_wq();
+wq_create_failed:
+	return status;
+}
+
+static void __exit nbl_module_exit(void)
+{
+	pci_unregister_driver(&nbl_driver);
+
+	nbl_common_destroy_wq();
+
+	pr_info("nbl module unloaded\n");
+}
+module_init(nbl_module_init);
+module_exit(nbl_module_exit);
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Nebula Matrix Network Driver");
--
2.47.3

^ permalink raw reply related	[flat|nested] 19+ messages in thread
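[Editorial note: the error handling in nbl_module_init() above follows the kernel's standard goto-unwind pattern, where each failure label releases only the resources acquired before that point, in reverse order. A minimal userspace sketch of the same pattern, with hypothetical init_a/init_b/exit_a stand-ins (init_a plays the role of nbl_common_create_wq(), init_b the role of pci_register_driver(); none of these names are driver symbols):]

```c
/* Hypothetical stand-ins for the two init stages; not driver symbols. */
static int a_up, b_up;

static int init_a(int fail) { if (fail) return -1; a_up = 1; return 0; }
static int init_b(int fail) { if (fail) return -1; b_up = 1; return 0; }
static void exit_a(void) { a_up = 0; }

/* Same shape as nbl_module_init(): each failure path unwinds only what
 * was successfully acquired before it, in reverse acquisition order. */
static int module_init_sketch(int fail_a, int fail_b)
{
	int status;

	status = init_a(fail_a);	/* cf. nbl_common_create_wq() */
	if (status)
		goto a_failed;		/* nothing to undo yet */

	status = init_b(fail_b);	/* cf. pci_register_driver() */
	if (status)
		goto b_failed;		/* undo init_a only */

	return 0;

b_failed:
	exit_a();			/* teardown in reverse order */
a_failed:
	return status;
}
```

[The labels sit in reverse acquisition order so that a failure at any stage falls through exactly the teardown calls it needs; nbl_module_exit() then mirrors the full unwind for the success case.]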
* [PATCH v2 net-next 14/15] net/nebula-matrix: add Dev start, stop operation
  2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
                   ` (12 preceding siblings ...)
  2026-01-09 10:01 ` [PATCH v2 net-next 13/15] net/nebula-matrix: add Dev init,remove operation illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
  2026-01-09 10:01 ` [PATCH v2 net-next 15/15] net/nebula-matrix: add st_sysfs and vf name sysfs illusion.wang
  2026-01-10  0:20 ` [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs Jakub Kicinski
  15 siblings, 0 replies; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
	vadim.fedorenko, lukas.bulwahn, edumazet, open list

The important steps in dev start are:

1. start common dev: configure the msix map table, allocate and enable
   msix vectors, register the mailbox ISR and enable the mailbox irq,
   set up the chan keepalive task.
2. start ctrl dev: request the abnormal and adminq ISRs and enable them.
   Schedule ctrl tasks such as the adapt desc gother task.
3. start net dev:
   3.1 alloc the netdev with multi-queue support, configure private data
       and associate it with the adapter.
   3.2 alloc tx/rx rings, set up network resource management (vlan, rate
       limiting).
   3.3 build the netdev structure, map queues to msix interrupts, init
       hw stats.
   3.4 register link stats and reset event chan msgs.
   3.5 start the net vsi and register the net irq.
   3.6 register the netdev.

Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
 .../net/ethernet/nebula-matrix/nbl/nbl_core.h |    3 +
 .../nebula-matrix/nbl/nbl_core/nbl_dev.c      | 1620 ++++++++++++-
 .../nebula-matrix/nbl/nbl_core/nbl_service.c  | 2036 ++++++++++++++++-
 .../nbl/nbl_include/nbl_def_dev.h             |    4 +
 .../nbl/nbl_include/nbl_def_service.h         |   56 +
 .../net/ethernet/nebula-matrix/nbl/nbl_main.c |   49 +
 6 files changed, 3737 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index 685d9f1831be..3db1364eefdc 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -127,4 +127,7 @@ struct nbl_netdev_priv {
 struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
				  struct nbl_init_param *param);
 void nbl_core_remove(struct nbl_adapter *adapter);
+int nbl_core_start(struct nbl_adapter *adapter, struct nbl_init_param *param);
+void nbl_core_stop(struct nbl_adapter *adapter);
+
 #endif
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
index 6b797d7ddbf8..a379a5851523 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -14,8 +14,25 @@ static struct nbl_dev_board_id_table board_id_table;
 static struct nbl_dev_ops dev_ops;
 
+static int nbl_dev_clean_mailbox_schedule(struct nbl_dev_mgt *dev_mgt);
+static void nbl_dev_clean_adminq_schedule(struct nbl_task_info *task_info);
 static void nbl_dev_handle_fatal_err(struct nbl_dev_mgt *dev_mgt);
+
 /* ---------- Basic functions ---------- */
+static int nbl_dev_get_port_attributes(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+
+	return serv_ops->get_port_attributes(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+}
+
+static int nbl_dev_enable_port(struct nbl_dev_mgt
*dev_mgt, bool enable) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->enable_port(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), enable); +} + static int nbl_dev_alloc_board_id(struct nbl_dev_board_id_table *index_table, u32 board_key) { @@ -58,7 +75,37 @@ static void nbl_dev_free_board_id(struct nbl_dev_board_id_table *index_table, sizeof(index_table->entry[i])); } +static void nbl_dev_set_netdev_priv(struct net_device *netdev, + struct nbl_dev_vsi *vsi) +{ + struct nbl_netdev_priv *net_priv = netdev_priv(netdev); + + net_priv->tx_queue_num = vsi->queue_num; + net_priv->rx_queue_num = vsi->queue_num; + net_priv->queue_size = vsi->queue_size; + net_priv->netdev = netdev; + net_priv->data_vsi = vsi->vsi_id; +} + /* ---------- Interrupt config ---------- */ +static irqreturn_t nbl_dev_clean_mailbox(int __always_unused irq, void *data) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)data; + + nbl_dev_clean_mailbox_schedule(dev_mgt); + + return IRQ_HANDLED; +} + +static irqreturn_t nbl_dev_clean_adminq(int __always_unused irq, void *data) +{ + struct nbl_task_info *task_info = (struct nbl_task_info *)data; + + nbl_dev_clean_adminq_schedule(task_info); + + return IRQ_HANDLED; +} + static void nbl_dev_handle_abnormal_event(struct work_struct *work) { struct nbl_task_info *task_info = container_of(work, @@ -70,6 +117,24 @@ static void nbl_dev_handle_abnormal_event(struct work_struct *work) serv_ops->process_abnormal_event(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); } +static void nbl_dev_clean_abnormal_status(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + struct nbl_task_info *task_info = NBL_DEV_CTRL_TO_TASK_INFO(ctrl_dev); + + nbl_common_queue_work(&task_info->clean_abnormal_irq_task, true); +} + +static irqreturn_t nbl_dev_clean_abnormal_event(int __always_unused irq, + void *data) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)data; + + 
nbl_dev_clean_abnormal_status(dev_mgt); + + return IRQ_HANDLED; +} + static void nbl_dev_register_common_irq(struct nbl_dev_mgt *dev_mgt) { struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); @@ -109,6 +174,486 @@ static void nbl_dev_register_ctrl_irq(struct nbl_dev_mgt *dev_mgt) irq_num.adminq_irq_num; } +static int nbl_dev_request_net_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct nbl_msix_info_param param = { 0 }; + int msix_num = msix_info->serv_info[NBL_MSIX_NET_TYPE].num; + int ret = 0; + + param.msix_entries = + kcalloc(msix_num, sizeof(*param.msix_entries), GFP_KERNEL); + if (!param.msix_entries) + return -ENOMEM; + + param.msix_num = msix_num; + memcpy(param.msix_entries, + msix_info->msix_entries + + msix_info->serv_info[NBL_MSIX_NET_TYPE].base_vector_id, + sizeof(param.msix_entries[0]) * msix_num); + + ret = serv_ops->request_net_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + ¶m); + + kfree(param.msix_entries); + return ret; +} + +static void nbl_dev_free_net_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct nbl_msix_info_param param = { 0 }; + int msix_num = msix_info->serv_info[NBL_MSIX_NET_TYPE].num; + + param.msix_entries = + kcalloc(msix_num, sizeof(*param.msix_entries), GFP_KERNEL); + if (!param.msix_entries) + return; + + param.msix_num = msix_num; + memcpy(param.msix_entries, + msix_info->msix_entries + + msix_info->serv_info[NBL_MSIX_NET_TYPE].base_vector_id, + sizeof(param.msix_entries[0]) * msix_num); + + serv_ops->free_net_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), ¶m); + + kfree(param.msix_entries); 
+} + +static int nbl_dev_request_mailbox_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + u16 local_vec_id; + u32 irq_num; + int err; + + if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id; + irq_num = msix_info->msix_entries[local_vec_id].vector; + + snprintf(dev_common->mailbox_name, sizeof(dev_common->mailbox_name), + "nbl_mailbox@pci:%s", pci_name(NBL_COMMON_TO_PDEV(common))); + err = devm_request_irq(dev, irq_num, nbl_dev_clean_mailbox, 0, + dev_common->mailbox_name, dev_mgt); + if (err) { + dev_err(dev, "Request mailbox irq handler failed err: %d\n", + err); + return err; + } + + return 0; +} + +static void nbl_dev_free_mailbox_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + u16 local_vec_id; + u32 irq_num; + + if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num) + return; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id; + irq_num = msix_info->msix_entries[local_vec_id].vector; + + devm_free_irq(dev, irq_num, dev_mgt); +} + +static int nbl_dev_enable_mailbox_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + u16 local_vec_id; + + if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num) + return 0; + + 
local_vec_id = + msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id; + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_INTERRUPT_READY, + NBL_CHAN_TYPE_MAILBOX, true); + + return serv_ops->enable_mailbox_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + local_vec_id, true); +} + +static int nbl_dev_disable_mailbox_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + u16 local_vec_id; + + if (!msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].num) + return 0; + + if (serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_CLEAN_MAILBOX_CAP)) + nbl_common_flush_task(&dev_common->clean_mbx_task); + + local_vec_id = + msix_info->serv_info[NBL_MSIX_MAILBOX_TYPE].base_vector_id; + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_INTERRUPT_READY, + NBL_CHAN_TYPE_MAILBOX, false); + + return serv_ops->enable_mailbox_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + local_vec_id, false); +} + +static int nbl_dev_request_adminq_irq(struct nbl_dev_mgt *dev_mgt, + struct nbl_task_info *task_info) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + u16 local_vec_id; + u32 irq_num; + char *irq_name; + int err; + + if (!msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].base_vector_id; + irq_num = msix_info->msix_entries[local_vec_id].vector; + irq_name = msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].irq_name; + + snprintf(irq_name, NBL_STRING_NAME_LEN, 
"nbl_adminq@pci:%s", + pci_name(NBL_COMMON_TO_PDEV(common))); + err = devm_request_irq(dev, irq_num, nbl_dev_clean_adminq, 0, irq_name, + task_info); + if (err) { + dev_err(dev, "Request adminq irq handler failed err: %d\n", + err); + return err; + } + + return 0; +} + +static void nbl_dev_free_adminq_irq(struct nbl_dev_mgt *dev_mgt, + struct nbl_task_info *task_info) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + u16 local_vec_id; + u32 irq_num; + + if (!msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].num) + return; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].base_vector_id; + irq_num = msix_info->msix_entries[local_vec_id].vector; + + devm_free_irq(dev, irq_num, task_info); +} + +static int nbl_dev_enable_adminq_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + u16 local_vec_id; + + if (!msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].base_vector_id; + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_INTERRUPT_READY, + NBL_CHAN_TYPE_ADMINQ, true); + + return serv_ops->enable_adminq_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + local_vec_id, true); +} + +static int nbl_dev_disable_adminq_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + 
u16 local_vec_id; + + if (!msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ADMINDQ_TYPE].base_vector_id; + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_INTERRUPT_READY, + NBL_CHAN_TYPE_ADMINQ, false); + + return serv_ops->enable_adminq_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + local_vec_id, false); +} + +static int nbl_dev_request_abnormal_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + char *irq_name; + u32 irq_num; + int err; + u16 local_vec_id; + + if (!msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].base_vector_id; + irq_num = msix_info->msix_entries[local_vec_id].vector; + irq_name = msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].irq_name; + + snprintf(irq_name, NBL_STRING_NAME_LEN, "nbl_abnormal@pci:%s", + pci_name(NBL_COMMON_TO_PDEV(common))); + err = devm_request_irq(dev, irq_num, nbl_dev_clean_abnormal_event, 0, + irq_name, dev_mgt); + if (err) { + dev_err(dev, + "Request abnormal_irq irq handler failed err: %d\n", + err); + return err; + } + + return 0; +} + +static void nbl_dev_free_abnormal_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + u16 local_vec_id; + u32 irq_num; + + if (!msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].num) + return; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].base_vector_id; + irq_num = msix_info->msix_entries[local_vec_id].vector; + + devm_free_irq(dev, irq_num, dev_mgt); +} + +static int 
nbl_dev_enable_abnormal_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + u16 local_vec_id; + int err = 0; + + if (!msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].base_vector_id; + err = serv_ops->enable_abnormal_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + local_vec_id, true); + + return err; +} + +static int nbl_dev_disable_abnormal_irq(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + u16 local_vec_id; + int err = 0; + + if (!msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].num) + return 0; + + local_vec_id = + msix_info->serv_info[NBL_MSIX_ABNORMAL_TYPE].base_vector_id; + err = serv_ops->enable_abnormal_irq(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + local_vec_id, false); + + return err; +} + +static int nbl_dev_configure_msix_map(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + u16 msix_not_net_num = 0; + u16 msix_net_num = msix_info->serv_info[NBL_MSIX_NET_TYPE].num; + bool mask_en = msix_info->serv_info[NBL_MSIX_NET_TYPE].hw_self_mask_en; + int err = 0; + int i; + + for (i = NBL_MSIX_NET_TYPE; i < NBL_MSIX_TYPE_MAX; i++) + msix_info->serv_info[i].base_vector_id = + msix_info->serv_info[i - 1].base_vector_id + + msix_info->serv_info[i - 1].num; + + for (i = NBL_MSIX_MAILBOX_TYPE; i < NBL_MSIX_TYPE_MAX; i++) { + if (i == NBL_MSIX_NET_TYPE) + continue; + + msix_not_net_num += 
msix_info->serv_info[i].num;
+	}
+
+	err = serv_ops->configure_msix_map(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+					   msix_net_num,
+					   msix_not_net_num,
+					   mask_en);
+
+	return err;
+}
+
+static int nbl_dev_destroy_msix_map(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+	int err = 0;
+
+	err = serv_ops->destroy_msix_map(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+	return err;
+}
+
+static int nbl_dev_alloc_msix_entries(struct nbl_dev_mgt *dev_mgt,
+				      u16 num_entries)
+{
+	struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+	struct nbl_msix_info *msix_info =
+		NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+	void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+	u16 i;
+
+	msix_info->msix_entries =
+		devm_kcalloc(NBL_DEV_MGT_TO_DEV(dev_mgt), num_entries,
+			     sizeof(*msix_info->msix_entries), GFP_KERNEL);
+	if (!msix_info->msix_entries)
+		return -ENOMEM;
+
+	for (i = 0; i < num_entries; i++)
+		msix_info->msix_entries[i].entry =
+			serv_ops->get_msix_entry_id(priv, i);
+
+	dev_info(NBL_DEV_MGT_TO_DEV(dev_mgt), "alloc msix entry: %u-%u.\n",
+		 msix_info->msix_entries[0].entry,
+		 msix_info->msix_entries[num_entries - 1].entry);
+
+	return 0;
+}
+
+static void nbl_dev_free_msix_entries(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+	struct nbl_msix_info *msix_info =
+		NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+
+	devm_kfree(NBL_DEV_MGT_TO_DEV(dev_mgt), msix_info->msix_entries);
+	msix_info->msix_entries = NULL;
+}
+
+static int nbl_dev_alloc_msix_intr(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt);
+	struct nbl_msix_info *msix_info =
+		NBL_DEV_COMMON_TO_MSIX_INFO(dev_common);
+	struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+	int needed = 0;
+	int err;
+	int i;
+
+	for (i = 0; i < NBL_MSIX_TYPE_MAX; i++)
+		needed +=
msix_info->serv_info[i].num; + + err = nbl_dev_alloc_msix_entries(dev_mgt, (u16)needed); + if (err) { + pr_err("Allocate msix entries failed\n"); + return err; + } + + err = pci_enable_msix_range(NBL_COMMON_TO_PDEV(common), + msix_info->msix_entries, needed, needed); + if (err < 0) { + pr_err("pci_enable_msix_range failed, err = %d.\n", err); + goto enable_msix_failed; + } + + return needed; + +enable_msix_failed: + nbl_dev_free_msix_entries(dev_mgt); + return err; +} + +static void nbl_dev_free_msix_intr(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + + pci_disable_msix(NBL_COMMON_TO_PDEV(common)); + nbl_dev_free_msix_entries(dev_mgt); +} + +static int nbl_dev_init_interrupt_scheme(struct nbl_dev_mgt *dev_mgt) +{ + int err = 0; + + err = nbl_dev_alloc_msix_intr(dev_mgt); + if (err < 0) { + dev_err(NBL_DEV_MGT_TO_DEV(dev_mgt), + "Failed to enable MSI-X vectors\n"); + return err; + } + + return 0; +} + +static void nbl_dev_clear_interrupt_scheme(struct nbl_dev_mgt *dev_mgt) +{ + nbl_dev_free_msix_intr(dev_mgt); +} + /* ---------- Channel config ---------- */ static int nbl_dev_setup_chan_qinfo(struct nbl_dev_mgt *dev_mgt, u8 chan_type) { @@ -152,6 +697,43 @@ static int nbl_dev_remove_chan_queue(struct nbl_dev_mgt *dev_mgt, u8 chan_type) return ret; } +static bool nbl_dev_should_chan_keepalive(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + bool ret = true; + + ret = serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_TASK_KEEP_ALIVE); + + return ret; +} + +static int nbl_dev_setup_chan_keepalive(struct nbl_dev_mgt *dev_mgt, + u8 chan_type) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + void *priv = NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt); + u16 dest_func_id = 
NBL_COMMON_TO_MGT_PF(common); + + if (!nbl_dev_should_chan_keepalive(dev_mgt)) + return 0; + + if (chan_type != NBL_CHAN_TYPE_MAILBOX) + return -EOPNOTSUPP; + + dest_func_id = + serv_ops->get_function_id(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_COMMON_TO_VSI_ID(common)); + + if (chan_ops->check_queue_exist(priv, chan_type)) + return chan_ops->setup_keepalive(priv, + dest_func_id, chan_type); + + return -ENOENT; +} + static void nbl_dev_remove_chan_keepalive(struct nbl_dev_mgt *dev_mgt, u8 chan_type) { @@ -182,8 +764,21 @@ static void nbl_dev_clean_mailbox_task(struct work_struct *work) struct nbl_dev_mgt *dev_mgt = common_dev->dev_mgt; struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); - chan_ops->clean_queue_subtask(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), - NBL_CHAN_TYPE_MAILBOX); + chan_ops->clean_queue_subtask(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_TYPE_MAILBOX); +} + +static int nbl_dev_clean_mailbox_schedule(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_common *common_dev = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + + if (ctrl_dev) + queue_work(ctrl_dev->ctrl_dev_wq1, &common_dev->clean_mbx_task); + else + nbl_common_queue_work(&common_dev->clean_mbx_task, false); + + return 0; } static void nbl_dev_prepare_reset_task(struct work_struct *work) @@ -232,6 +827,11 @@ static void nbl_dev_clean_adminq_task(struct work_struct *work) NBL_CHAN_TYPE_ADMINQ); } +static void nbl_dev_clean_adminq_schedule(struct nbl_task_info *task_info) +{ + nbl_common_queue_work(&task_info->clean_adminq_task, true); +} + static void nbl_dev_fw_heartbeat_task(struct work_struct *work) { struct nbl_task_info *task_info = @@ -257,6 +857,39 @@ static void nbl_dev_fw_heartbeat_task(struct work_struct *work) static void nbl_dev_fw_reset_task(struct work_struct *work) { + struct delayed_work *delayed_work = to_delayed_work(work); + struct nbl_task_info *task_info = + container_of(delayed_work, struct 
nbl_task_info, fw_reset_task); + struct nbl_dev_mgt *dev_mgt = task_info->dev_mgt; + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + + if (serv_ops->check_fw_reset(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt))) { + dev_notice(NBL_COMMON_TO_DEV(common), "FW recovered"); + nbl_dev_disable_adminq_irq(dev_mgt); + nbl_dev_free_adminq_irq(dev_mgt, task_info); + + msleep(NBL_DEV_FW_RESET_WAIT_TIME); // wait adminq timeout + nbl_dev_remove_chan_queue(dev_mgt, NBL_CHAN_TYPE_ADMINQ); + nbl_dev_setup_chan_qinfo(dev_mgt, NBL_CHAN_TYPE_ADMINQ); + nbl_dev_setup_chan_queue(dev_mgt, NBL_CHAN_TYPE_ADMINQ); + nbl_dev_request_adminq_irq(dev_mgt, task_info); + nbl_dev_enable_adminq_irq(dev_mgt); + + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_ABNORMAL, + NBL_CHAN_TYPE_ADMINQ, false); + + if (NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt)) { + nbl_dev_get_port_attributes(dev_mgt); + nbl_dev_enable_port(dev_mgt, true); + } + task_info->fw_resetting = false; + return; + } + + nbl_common_q_dwork(delayed_work, MSEC_PER_SEC, true); } static void nbl_dev_adapt_desc_gother_task(struct work_struct *work) @@ -318,6 +951,30 @@ static void nbl_dev_ctrl_task_timer(struct timer_list *t) nbl_dev_ctrl_task_schedule(task_info); } +static void nbl_dev_ctrl_task_start(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + struct nbl_task_info *task_info = NBL_DEV_CTRL_TO_TASK_INFO(ctrl_dev); + + if (!task_info->timer_setup) + return; + + mod_timer(&task_info->serv_timer, + round_jiffies(jiffies + task_info->serv_timer_period)); +} + +static void nbl_dev_ctrl_task_stop(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + struct nbl_task_info *task_info = NBL_DEV_CTRL_TO_TASK_INFO(ctrl_dev); + + if (!task_info->timer_setup) + return; + + 
timer_delete_sync(&task_info->serv_timer); + task_info->timer_setup = false; +} + static void nbl_dev_chan_notify_flr_resp(void *priv, u16 src_id, u16 msg_id, void *data, u32 data_len) { @@ -842,6 +1499,33 @@ static void nbl_dev_netdev_get_stats64(struct net_device *netdev, serv_ops->get_stats64(netdev, stats); } +static void nbl_dev_netdev_set_rx_mode(struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->set_rx_mode(netdev); +} + +static void nbl_dev_netdev_change_rx_flags(struct net_device *netdev, int flag) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->change_rx_flags(netdev, flag); +} + +static int nbl_dev_netdev_set_mac(struct net_device *netdev, void *p) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->set_mac(netdev, p); +} + static int nbl_dev_netdev_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { @@ -862,14 +1546,83 @@ static int nbl_dev_netdev_rx_kill_vid(struct net_device *netdev, __be16 proto, return serv_ops->rx_kill_vid(netdev, proto, vid); } +static int nbl_dev_netdev_set_features(struct net_device *netdev, + netdev_features_t features) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->set_features(netdev, features); +} + +static netdev_features_t +nbl_dev_netdev_features_check(struct sk_buff *skb, struct net_device *netdev, + netdev_features_t features) 
+{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->features_check(skb, netdev, features); +} + +static void nbl_dev_netdev_tx_timeout(struct net_device *netdev, u32 txqueue) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->tx_timeout(netdev, txqueue); +} + +static u16 nbl_dev_netdev_select_queue(struct net_device *netdev, + struct sk_buff *skb, + struct net_device *sb_dev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->select_queue(netdev, skb, sb_dev); +} + +static int nbl_dev_netdev_change_mtu(struct net_device *netdev, int new_mtu) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->change_mtu(netdev, new_mtu); +} + +static int nbl_dev_ndo_get_phys_port_name(struct net_device *netdev, char *name, + size_t len) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->get_phys_port_name(netdev, name, len); +} + static const struct net_device_ops netdev_ops_leonis_pf = { .ndo_open = nbl_dev_netdev_open, .ndo_stop = nbl_dev_netdev_stop, .ndo_start_xmit = nbl_dev_start_xmit, .ndo_validate_addr = eth_validate_addr, .ndo_get_stats64 = nbl_dev_netdev_get_stats64, + .ndo_set_rx_mode = nbl_dev_netdev_set_rx_mode, + .ndo_change_rx_flags = 
nbl_dev_netdev_change_rx_flags, + .ndo_set_mac_address = nbl_dev_netdev_set_mac, .ndo_vlan_rx_add_vid = nbl_dev_netdev_rx_add_vid, .ndo_vlan_rx_kill_vid = nbl_dev_netdev_rx_kill_vid, + .ndo_set_features = nbl_dev_netdev_set_features, + .ndo_features_check = nbl_dev_netdev_features_check, + .ndo_tx_timeout = nbl_dev_netdev_tx_timeout, + .ndo_select_queue = nbl_dev_netdev_select_queue, + .ndo_change_mtu = nbl_dev_netdev_change_mtu, + .ndo_get_phys_port_name = nbl_dev_ndo_get_phys_port_name, }; @@ -879,9 +1632,15 @@ static const struct net_device_ops netdev_ops_leonis_vf = { .ndo_start_xmit = nbl_dev_start_xmit, .ndo_validate_addr = eth_validate_addr, .ndo_get_stats64 = nbl_dev_netdev_get_stats64, + .ndo_set_rx_mode = nbl_dev_netdev_set_rx_mode, + .ndo_set_mac_address = nbl_dev_netdev_set_mac, .ndo_vlan_rx_add_vid = nbl_dev_netdev_rx_add_vid, .ndo_vlan_rx_kill_vid = nbl_dev_netdev_rx_kill_vid, - + .ndo_features_check = nbl_dev_netdev_features_check, + .ndo_tx_timeout = nbl_dev_netdev_tx_timeout, + .ndo_select_queue = nbl_dev_netdev_select_queue, + .ndo_change_mtu = nbl_dev_netdev_change_mtu, + .ndo_get_phys_port_name = nbl_dev_ndo_get_phys_port_name, }; static int nbl_dev_setup_netops_leonis(void *priv, struct net_device *netdev, @@ -901,6 +1660,80 @@ static int nbl_dev_setup_netops_leonis(void *priv, struct net_device *netdev, return 0; } +static void nbl_dev_remove_netops(struct net_device *netdev) +{ + netdev->netdev_ops = NULL; +} + +static void nbl_dev_set_eth_mac_addr(struct nbl_dev_mgt *dev_mgt, + struct net_device *netdev) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + u8 mac[ETH_ALEN]; + + ether_addr_copy(mac, netdev->dev_addr); + serv_ops->set_eth_mac_addr(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), mac, + NBL_COMMON_TO_ETH_ID(common)); +} + +static int nbl_dev_cfg_netdev(struct net_device *netdev, + struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + struct 
nbl_register_net_result *register_result) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_net_ops *net_dev_ops = + NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt); + u64 vlan_features = 0; + int ret = 0; + + if (param->pci_using_dac) + netdev->features |= NETIF_F_HIGHDMA; + netdev->watchdog_timeo = 5 * HZ; + + vlan_features = register_result->vlan_features ? + register_result->vlan_features : + register_result->features; + netdev->hw_features |= + nbl_features_to_netdev_features(register_result->hw_features); + netdev->features |= + nbl_features_to_netdev_features(register_result->features); + netdev->vlan_features |= nbl_features_to_netdev_features(vlan_features); + + netdev->priv_flags |= IFF_UNICAST_FLT; + + SET_DEV_MIN_MTU(netdev, ETH_MIN_MTU); + SET_DEV_MAX_MTU(netdev, register_result->max_mtu); + netdev->mtu = min_t(u16, register_result->max_mtu, NBL_DEFAULT_MTU); + serv_ops->change_mtu(netdev, netdev->mtu); + + if (is_valid_ether_addr(register_result->mac)) + eth_hw_addr_set(netdev, register_result->mac); + else + eth_hw_addr_random(netdev); + + ether_addr_copy(netdev->perm_addr, netdev->dev_addr); + + netdev->needed_headroom = + serv_ops->get_tx_headroom(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + + ret = net_dev_ops->setup_netdev_ops(dev_mgt, netdev, param); + if (ret) + goto set_ops_fail; + + nbl_dev_set_eth_mac_addr(dev_mgt, netdev); + + return 0; +set_ops_fail: + return ret; +} + +static void nbl_dev_reset_netdev(struct net_device *netdev) +{ + nbl_dev_remove_netops(netdev); +} + static int nbl_dev_register_net(struct nbl_dev_mgt *dev_mgt, struct nbl_register_net_result *register_result) { @@ -1020,6 +1853,78 @@ static void nbl_dev_vsi_common_remove(struct nbl_dev_mgt *dev_mgt, { } +static int nbl_dev_vsi_common_start(struct nbl_dev_mgt *dev_mgt, + struct net_device *netdev, + struct nbl_dev_vsi *vsi) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + 
int ret; + + vsi->napi_netdev = netdev; + + ret = serv_ops->setup_q2vsi(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + if (ret) { + dev_err(dev, "Setup q2vsi failed\n"); + goto set_q2vsi_fail; + } + + ret = serv_ops->setup_rss(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + if (ret) { + dev_err(dev, "Setup rss failed\n"); + goto set_rss_fail; + } + + ret = serv_ops->setup_rss_indir(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + if (ret) { + dev_err(dev, "Setup rss indir failed\n"); + goto setup_rss_indir_fail; + } + + if (vsi->use_independ_irq) { + ret = serv_ops->enable_napis(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->index); + if (ret) { + dev_err(dev, "Enable napis failed\n"); + goto enable_napi_fail; + } + } + + ret = serv_ops->init_tx_rate(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + if (ret) { + dev_err(dev, "init tx_rate failed\n"); + goto init_tx_rate_fail; + } + + return 0; + +init_tx_rate_fail: + serv_ops->disable_napis(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->index); +enable_napi_fail: +setup_rss_indir_fail: + serv_ops->remove_rss(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id); +set_rss_fail: + serv_ops->remove_q2vsi(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id); +set_q2vsi_fail: + return ret; +} + +static void nbl_dev_vsi_common_stop(struct nbl_dev_mgt *dev_mgt, + struct nbl_dev_vsi *vsi) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + if (vsi->use_independ_irq) + serv_ops->disable_napis(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->index); + serv_ops->remove_rss(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id); + serv_ops->remove_q2vsi(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id); +} + static int nbl_dev_vsi_data_register(struct nbl_dev_mgt *dev_mgt, struct nbl_init_param *param, void *vsi_data) @@ -1056,35 +1961,172 @@ static void nbl_dev_vsi_data_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data) nbl_dev_vsi_common_remove(dev_mgt, vsi); } -static int nbl_dev_vsi_ctrl_register(struct nbl_dev_mgt 
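[Editor note] nbl_dev_vsi_common_start() above uses the kernel's conventional goto-unwind style: each failure jumps to a label that tears down only the steps that already succeeded, in reverse order. A compilable two-step toy version of the pattern (all sketch_* names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* State flags so the unwind can be observed from outside. */
bool sketch_q2vsi_up;
bool sketch_rss_up;

static int sketch_setup_q2vsi(void)
{
	sketch_q2vsi_up = true;
	return 0;
}

static void sketch_remove_q2vsi(void)
{
	sketch_q2vsi_up = false;
}

static int sketch_setup_rss(bool fail)
{
	if (fail)
		return -1;
	sketch_rss_up = true;
	return 0;
}

/* Two-step version of the start path: on failure of step N, fall
 * through labels that undo steps N-1..1, newest first.
 */
int sketch_vsi_start(bool fail_rss)
{
	int ret;

	ret = sketch_setup_q2vsi();
	if (ret)
		goto set_q2vsi_fail;

	ret = sketch_setup_rss(fail_rss);
	if (ret)
		goto set_rss_fail;

	return 0;

set_rss_fail:
	sketch_remove_q2vsi(); /* undo only what already succeeded */
set_q2vsi_fail:
	return ret;
}
```

Note how the label order mirrors the setup order exactly; that symmetry is what makes the five-step unwind in nbl_dev_vsi_common_start() easy to audit.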
*dev_mgt, - struct nbl_init_param *param, - void *vsi_data) +static int nbl_dev_vsi_data_start(void *dev_priv, struct net_device *netdev, + void *vsi_data) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)dev_priv; + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + int ret; + u16 vid; + + vid = vsi->register_result.vlan_tci & VLAN_VID_MASK; + ret = serv_ops->start_net_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + netdev, vsi->vsi_id, vid, + vsi->register_result.trusted); + if (ret) { + dev_err(dev, "Set netdev flow table failed\n"); + goto set_flow_fail; + } + + if (!NBL_COMMON_TO_VF_CAP(common)) { + ret = serv_ops->set_lldp_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + if (ret) { + dev_err(dev, "Set netdev lldp flow failed\n"); + goto set_lldp_fail; + } + vsi->feature.has_lldp = true; + } + + ret = nbl_dev_vsi_common_start(dev_mgt, netdev, vsi); + if (ret) { + dev_err(dev, "Vsi common start failed\n"); + goto common_start_fail; + } + + return 0; + +common_start_fail: + if (!NBL_COMMON_TO_VF_CAP(common)) + serv_ops->remove_lldp_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); +set_lldp_fail: + serv_ops->stop_net_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id); +set_flow_fail: + return ret; +} + +static void nbl_dev_vsi_data_stop(void *dev_priv, void *vsi_data) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)dev_priv; + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + nbl_dev_vsi_common_stop(dev_mgt, vsi); + + if (!NBL_COMMON_TO_VF_CAP(common)) { + serv_ops->remove_lldp_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + vsi->feature.has_lldp = false; + } + + 
serv_ops->stop_net_flow(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->vsi_id); +} + +static int nbl_dev_vsi_data_netdev_build(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + struct net_device *netdev, + void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + vsi->netdev = netdev; + return nbl_dev_cfg_netdev(netdev, dev_mgt, param, + &vsi->register_result); +} + +static void nbl_dev_vsi_data_netdev_destroy(struct nbl_dev_mgt *dev_mgt, + void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + nbl_dev_reset_netdev(vsi->netdev); +} + +static int nbl_dev_vsi_ctrl_register(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + void *vsi_data) +{ + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + serv_ops->get_rep_queue_info(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + &vsi->queue_num, &vsi->queue_size); + + nbl_debug(common, "Ctrl vsi register, queue_num %d, queue_size %d", + vsi->queue_num, vsi->queue_size); + return 0; +} + +static int nbl_dev_vsi_ctrl_setup(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + return nbl_dev_vsi_common_setup(dev_mgt, param, vsi); +} + +static void nbl_dev_vsi_ctrl_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data) +{ + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + + nbl_dev_vsi_common_remove(dev_mgt, vsi); +} + +static int nbl_dev_vsi_ctrl_start(void *dev_priv, struct net_device *netdev, + void *vsi_data) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)dev_priv; + struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + int ret; + + ret = nbl_dev_vsi_common_start(dev_mgt, netdev, vsi); + if (ret) + goto start_fail; + + /* For ctrl vsi, open 
it right after creation, since it + * has no ndo_open callback. + */ + ret = serv_ops->vsi_open(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), netdev, + vsi->index, vsi->queue_num, 1); + if (ret) + goto open_fail; + + return ret; + +open_fail: + nbl_dev_vsi_common_stop(dev_mgt, vsi); +start_fail: + return ret; +} + +static void nbl_dev_vsi_ctrl_stop(void *dev_priv, void *vsi_data) { - struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)dev_priv; struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); - serv_ops->get_rep_queue_info(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), - &vsi->queue_num, &vsi->queue_size); - - nbl_debug(common, "Ctrl vsi register, queue_num %d, queue_size %d", - vsi->queue_num, vsi->queue_size); - return 0; + serv_ops->vsi_stop(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), vsi->index); + nbl_dev_vsi_common_stop(dev_mgt, vsi); } -static int nbl_dev_vsi_ctrl_setup(struct nbl_dev_mgt *dev_mgt, - struct nbl_init_param *param, void *vsi_data) +static int nbl_dev_vsi_ctrl_netdev_build(struct nbl_dev_mgt *dev_mgt, + struct nbl_init_param *param, + struct net_device *netdev, + void *vsi_data) { - struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; - - return nbl_dev_vsi_common_setup(dev_mgt, param, vsi); + return 0; } -static void nbl_dev_vsi_ctrl_remove(struct nbl_dev_mgt *dev_mgt, void *vsi_data) +static void nbl_dev_vsi_ctrl_netdev_destroy(struct nbl_dev_mgt *dev_mgt, + void *vsi_data) { - struct nbl_dev_vsi *vsi = (struct nbl_dev_vsi *)vsi_data; - - nbl_dev_vsi_common_remove(dev_mgt, vsi); } static struct nbl_dev_vsi_tbl vsi_tbl[NBL_VSI_MAX] = { @@ -1093,6 +2135,10 @@ static struct nbl_dev_vsi_tbl vsi_tbl[NBL_VSI_MAX] = { .register_vsi = nbl_dev_vsi_data_register, .setup = nbl_dev_vsi_data_setup, .remove = nbl_dev_vsi_data_remove, + .start = nbl_dev_vsi_data_start, + .stop = nbl_dev_vsi_data_stop, + .netdev_build = 
nbl_dev_vsi_data_netdev_build, + .netdev_destroy = nbl_dev_vsi_data_netdev_destroy, }, .vf_support = true, .only_nic_support = false, @@ -1105,6 +2151,10 @@ static struct nbl_dev_vsi_tbl vsi_tbl[NBL_VSI_MAX] = { .register_vsi = nbl_dev_vsi_ctrl_register, .setup = nbl_dev_vsi_ctrl_setup, .remove = nbl_dev_vsi_ctrl_remove, + .start = nbl_dev_vsi_ctrl_start, + .stop = nbl_dev_vsi_ctrl_stop, + .netdev_build = nbl_dev_vsi_ctrl_netdev_build, + .netdev_destroy = nbl_dev_vsi_ctrl_netdev_destroy, }, .vf_support = false, .only_nic_support = true, @@ -1423,6 +2473,532 @@ void nbl_dev_remove(void *p) nbl_dev_remove_dev_mgt(common, dev_mgt); } +static void nbl_dev_notify_dev_prepare_reset(struct nbl_dev_mgt *dev_mgt, + enum nbl_reset_event event) +{ + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_chan_send_info chan_send; + unsigned long cur_func = 0; + unsigned long next_func = 0; + unsigned long *func_bitmap; + int func_num = 0; + + func_bitmap = devm_kcalloc(NBL_COMMON_TO_DEV(common), + BITS_TO_LONGS(NBL_MAX_FUNC), sizeof(long), + GFP_KERNEL); + if (!func_bitmap) + return; + + serv_ops->get_active_func_bitmaps(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + func_bitmap, NBL_MAX_FUNC); + memset(dev_mgt->ctrl_dev->task_info.reset_status, 0, + sizeof(dev_mgt->ctrl_dev->task_info.reset_status)); + /* clear ctrl_dev func_id, and do it last */ + clear_bit(NBL_COMMON_TO_MGT_PF(common), func_bitmap); + + cur_func = NBL_COMMON_TO_MGT_PF(common); + while (1) { + next_func = + find_next_bit(func_bitmap, NBL_MAX_FUNC, cur_func + 1); + if (next_func >= NBL_MAX_FUNC) + break; + + cur_func = next_func; + dev_mgt->ctrl_dev->task_info.reset_status[cur_func] = + NBL_RESET_SEND; + NBL_CHAN_SEND(chan_send, cur_func, + NBL_CHAN_MSG_NOTIFY_RESET_EVENT, &event, + sizeof(event), NULL, 0, 0); + 
chan_ops->send_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + &chan_send); + func_num++; + if (func_num >= NBL_DEV_BATCH_RESET_FUNC_NUM) { + usleep_range(NBL_DEV_BATCH_RESET_USEC, + NBL_DEV_BATCH_RESET_USEC * 2); + func_num = 0; + } + } + + if (func_num) + usleep_range(NBL_DEV_BATCH_RESET_USEC, + NBL_DEV_BATCH_RESET_USEC * 2); + + /* The ctrl dev must be handled last, because its reset task closes the mailbox */ + dev_mgt->ctrl_dev->task_info.reset_status[common->mgt_pf] = + NBL_RESET_SEND; + NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common), + NBL_CHAN_MSG_NOTIFY_RESET_EVENT, NULL, 0, NULL, 0, 0); + chan_ops->send_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), &chan_send); + usleep_range(NBL_DEV_BATCH_RESET_USEC, NBL_DEV_BATCH_RESET_USEC * 2); + + cur_func = NBL_COMMON_TO_MGT_PF(common); + while (1) { + if (dev_mgt->ctrl_dev->task_info.reset_status[cur_func] == + NBL_RESET_SEND) + nbl_info(common, "func %ld reset failed", cur_func); + + next_func = + find_next_bit(func_bitmap, NBL_MAX_FUNC, cur_func + 1); + if (next_func >= NBL_MAX_FUNC) + break; + + cur_func = next_func; + } + + devm_kfree(NBL_COMMON_TO_DEV(common), func_bitmap); +} + static void nbl_dev_handle_fatal_err(struct nbl_dev_mgt *dev_mgt) { + struct nbl_adapter *adapter = + NBL_NETDEV_TO_ADAPTER(dev_mgt->net_dev->netdev); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_chan_param_notify_fw_reset_info fw_reset = {0}; + struct nbl_chan_send_info chan_send; + + if (test_and_set_bit(NBL_FATAL_ERR, adapter->state)) { + nbl_info(common, "dev in fatal_err status already."); + return; + } + + nbl_dev_disable_abnormal_irq(dev_mgt); + nbl_dev_ctrl_task_stop(dev_mgt); + nbl_dev_notify_dev_prepare_reset(dev_mgt, NBL_HW_FATAL_ERR_EVENT); + + /* notify the EMP to shut the device down */ + fw_reset.type = NBL_FW_HIGH_TEMP_RESET; + NBL_CHAN_SEND(chan_send, NBL_CHAN_ADMINQ_FUNCTION_ID, 
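[Editor note] nbl_dev_notify_dev_prepare_reset() above walks the active-function bitmap with find_next_bit() and sleeps after every NBL_DEV_BATCH_RESET_FUNC_NUM notifications so the mailbox is not flooded. A small userspace model of that batched walk; sketch_find_next_bit stands in for the kernel helper, and the constants are illustrative, not the driver's:

```c
#include <assert.h>

#define SKETCH_MAX_FUNC 32
#define SKETCH_BATCH    4

/* Userspace stand-in for find_next_bit(): first set bit at or after
 * offset, or size when no bit remains.
 */
static unsigned int sketch_find_next_bit(unsigned long bitmap,
					 unsigned int size,
					 unsigned int offset)
{
	for (; offset < size; offset++)
		if (bitmap & (1UL << offset))
			return offset;
	return size;
}

/* Walk all set bits after 'start', counting one pause per full batch
 * of SKETCH_BATCH notifications plus one for a trailing partial batch
 * (the usleep_range() calls in the driver).  Returns the number of
 * functions notified.
 */
int sketch_notify_all(unsigned long bitmap, unsigned int start,
		      int *pauses)
{
	unsigned int cur = start;
	int func_num = 0, total = 0;

	*pauses = 0;
	while (1) {
		unsigned int next = sketch_find_next_bit(bitmap,
							 SKETCH_MAX_FUNC,
							 cur + 1);
		if (next >= SKETCH_MAX_FUNC)
			break;

		cur = next;
		total++;	/* send_msg(cur) would go here */
		if (++func_num >= SKETCH_BATCH) {
			(*pauses)++;	/* batch boundary: let mailbox drain */
			func_num = 0;
		}
	}
	if (func_num)
		(*pauses)++;	/* partial final batch also settles */
	return total;
}
```

With bits 1..9 set and a batch size of 4, nine functions are notified with three pauses, which is the shape of traffic the driver's loop produces.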
+ NBL_CHAN_MSG_ADMINQ_NOTIFY_FW_RESET, &fw_reset, + sizeof(fw_reset), NULL, 0, 0); + chan_ops->send_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), &chan_send); + + chan_ops->set_queue_state(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_ABNORMAL, NBL_CHAN_TYPE_ADMINQ, + true); + serv_ops->set_hw_status(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + NBL_HW_FATAL_ERR); + nbl_info(common, "dev in fatal_err status."); +} + +/* ---------- Dev start process ---------- */ +static int nbl_dev_start_ctrl_dev(struct nbl_adapter *adapter, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_ctrl *ctrl_dev = NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt); + int err; + + err = nbl_dev_request_abnormal_irq(dev_mgt); + if (err) + goto abnormal_request_irq_err; + + err = nbl_dev_enable_abnormal_irq(dev_mgt); + if (err) + goto enable_abnormal_irq_err; + + err = nbl_dev_request_adminq_irq(dev_mgt, + &ctrl_dev->task_info); + if (err) + goto request_adminq_irq_err; + + err = nbl_dev_enable_adminq_irq(dev_mgt); + if (err) + goto enable_adminq_irq_err; + + nbl_dev_get_port_attributes(dev_mgt); + nbl_dev_enable_port(dev_mgt, true); + nbl_dev_ctrl_task_start(dev_mgt); + + return 0; + +enable_adminq_irq_err: + nbl_dev_free_adminq_irq(dev_mgt, + &ctrl_dev->task_info); +request_adminq_irq_err: + nbl_dev_disable_abnormal_irq(dev_mgt); +enable_abnormal_irq_err: + nbl_dev_free_abnormal_irq(dev_mgt); +abnormal_request_irq_err: + return err; +} + +static void nbl_dev_stop_ctrl_dev(struct nbl_adapter *adapter) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + + if (!NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt)) + return; + + nbl_dev_ctrl_task_stop(dev_mgt); + nbl_dev_enable_port(dev_mgt, false); + nbl_dev_disable_adminq_irq(dev_mgt); + nbl_dev_free_adminq_irq(dev_mgt, + &NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt)->task_info); + nbl_dev_disable_abnormal_irq(dev_mgt); + nbl_dev_free_abnormal_irq(dev_mgt); +} + +static void 
nbl_dev_chan_notify_link_state_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct net_device *netdev = (struct net_device *)priv; + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_chan_param_notify_link_state *link_info; + + link_info = (struct nbl_chan_param_notify_link_state *)data; + + serv_ops->set_netdev_carrier_state(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + netdev, link_info->link_state); +} + +static void nbl_dev_register_link_state_chan_msg(struct nbl_dev_mgt *dev_mgt, + struct net_device *netdev) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + if (!chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_TYPE_MAILBOX)) + return; + + chan_ops->register_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_MSG_NOTIFY_LINK_STATE, + nbl_dev_chan_notify_link_state_resp, netdev); +} + +static void nbl_dev_chan_notify_reset_event_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv; + enum nbl_reset_event event = *(enum nbl_reset_event *)data; + + dev_mgt->common_dev->reset_task.event = event; + nbl_common_queue_work(&dev_mgt->common_dev->reset_task.task, false); +} + +static void nbl_dev_chan_ack_reset_event_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv; + + WRITE_ONCE(dev_mgt->ctrl_dev->task_info.reset_status[src_id], + NBL_RESET_DONE); +} + +static void nbl_dev_register_reset_event_chan_msg(struct nbl_dev_mgt *dev_mgt) +{ + struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt); + + if (!chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_TYPE_MAILBOX)) + return; + + chan_ops->register_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + 
NBL_CHAN_MSG_NOTIFY_RESET_EVENT, + nbl_dev_chan_notify_reset_event_resp, dev_mgt); + chan_ops->register_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), + NBL_CHAN_MSG_ACK_RESET_EVENT, + nbl_dev_chan_ack_reset_event_resp, dev_mgt); +} + +int nbl_dev_setup_vf_config(void *p, int num_vfs) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->setup_vf_config(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + num_vfs, false); +} + +void nbl_dev_remove_vf_config(void *p) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + + return serv_ops->remove_vf_config(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +} + +static int nbl_dev_start_net_dev(struct nbl_adapter *adapter, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_common *dev_common = NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt); + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + + struct nbl_msix_info *msix_info = + NBL_DEV_COMMON_TO_MSIX_INFO(dev_common); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct net_device *netdev = net_dev->netdev; + struct nbl_netdev_priv *net_priv; + struct device *dev = NBL_DEV_MGT_TO_DEV(dev_mgt); + void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt); + struct nbl_ring_param ring_param = {0}; + struct nbl_dev_vsi *vsi; + u16 net_vector_id, queue_num; + int ret; + + vsi = nbl_dev_vsi_select(dev_mgt, NBL_VSI_DATA); + if (!vsi) + return -EFAULT; + + queue_num = vsi->queue_num; + netdev = alloc_etherdev_mqs(sizeof(struct nbl_netdev_priv), queue_num, + queue_num); + if (!netdev) { + dev_err(dev, "Alloc net device failed\n"); + ret = -ENOMEM; + goto 
alloc_netdev_fail; + } + + SET_NETDEV_DEV(netdev, dev); + net_priv = netdev_priv(netdev); + net_priv->adapter = adapter; + nbl_dev_set_netdev_priv(netdev, vsi); + + net_dev->netdev = netdev; + common->msg_enable = netif_msg_init(-1, DEFAULT_MSG_ENABLE); + serv_ops->set_mask_en(priv, 1); + + ring_param.tx_ring_num = net_dev->kernel_queue_num; + ring_param.rx_ring_num = net_dev->kernel_queue_num; + ring_param.queue_size = net_priv->queue_size; + ret = serv_ops->alloc_rings(priv, netdev, &ring_param); + if (ret) { + dev_err(dev, "Alloc rings failed\n"); + goto alloc_rings_fail; + } + + serv_ops->cpu_affinity_init(priv, + vsi->queue_num); + ret = serv_ops->setup_net_resource_mgt(priv, netdev, + vsi->register_result.vlan_proto, + vsi->register_result.vlan_tci, + vsi->register_result.rate); + if (ret) { + dev_err(dev, "Setup net resource mgt failed\n"); + goto setup_net_mgt_fail; + } + + /* The netdev must be built before setup_txrx_queues: the MAC snoop + * check trusts a MAC the PF configured for the VF via 'ip link', and + * a VF MAC change is no longer permitted once queues are allocated. 
+ */ + ret = vsi->ops->netdev_build(dev_mgt, param, netdev, vsi); + if (ret) { + dev_err(dev, "Build netdev failed, selected vsi %d\n", + vsi->index); + goto build_netdev_fail; + } + + net_vector_id = msix_info->serv_info[NBL_MSIX_NET_TYPE].base_vector_id; + ret = serv_ops->setup_txrx_queues(priv, + vsi->vsi_id, net_dev->total_queue_num, + net_vector_id); + if (ret) { + dev_err(dev, "Set queue map failed\n"); + goto set_queue_fail; + } + + ret = serv_ops->init_hw_stats(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + if (ret) { + dev_err(dev, "init hw stats failed\n"); + goto init_hw_stats_fail; + } + + nbl_dev_register_link_state_chan_msg(dev_mgt, netdev); + nbl_dev_register_reset_event_chan_msg(dev_mgt); + + ret = vsi->ops->start(dev_mgt, netdev, vsi); + if (ret) { + dev_err(dev, "Start vsi failed, selected vsi %d\n", vsi->index); + goto start_vsi_fail; + } + + ret = nbl_dev_request_net_irq(dev_mgt); + if (ret) { + dev_err(dev, "request irq failed\n"); + goto request_irq_fail; + } + + netif_carrier_off(netdev); + + ret = register_netdev(netdev); + if (ret) { + dev_err(dev, "Register netdev failed\n"); + goto register_netdev_fail; + } + + if (!param->caps.is_vf) { + if (net_dev->total_vfs) { + ret = serv_ops->setup_vf_resource(priv, + net_dev->total_vfs); + if (ret) + goto setup_vf_res_fail; + } + } + + set_bit(NBL_DOWN, adapter->state); + + return 0; +setup_vf_res_fail: + unregister_netdev(netdev); +register_netdev_fail: + nbl_dev_free_net_irq(dev_mgt); +request_irq_fail: + vsi->ops->stop(dev_mgt, vsi); +start_vsi_fail: + serv_ops->remove_hw_stats(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +init_hw_stats_fail: + serv_ops->remove_txrx_queues(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); +set_queue_fail: + vsi->ops->netdev_destroy(dev_mgt, vsi); +build_netdev_fail: + serv_ops->remove_net_resource_mgt(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +setup_net_mgt_fail: + serv_ops->free_rings(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); +alloc_rings_fail: + free_netdev(netdev); +alloc_netdev_fail: + 
return ret; +} + +static void nbl_dev_stop_net_dev(struct nbl_adapter *adapter) +{ + struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter); + struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt); + struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt); + struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt); + struct nbl_dev_vsi *vsi; + struct net_device *netdev; + + if (!net_dev) + return; + + netdev = net_dev->netdev; + + vsi = net_dev->vsi_ctrl.vsi_list[NBL_VSI_DATA]; + if (!vsi) + return; + + if (!common->is_vf) + serv_ops->remove_vf_resource(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + + serv_ops->change_mtu(netdev, 0); + unregister_netdev(netdev); + rtnl_lock(); + netif_device_detach(netdev); + rtnl_unlock(); + + vsi->ops->netdev_destroy(dev_mgt, vsi); + vsi->ops->stop(dev_mgt, vsi); + + nbl_dev_free_net_irq(dev_mgt); + + serv_ops->remove_hw_stats(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + + serv_ops->remove_net_resource_mgt(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + serv_ops->remove_txrx_queues(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt), + vsi->vsi_id); + serv_ops->free_rings(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt)); + + free_netdev(netdev); +} + +static int nbl_dev_start_common_dev(struct nbl_adapter *adapter, + struct nbl_init_param *param) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + int ret; + + ret = nbl_dev_configure_msix_map(dev_mgt); + if (ret) + goto config_msix_map_err; + + ret = nbl_dev_init_interrupt_scheme(dev_mgt); + if (ret) + goto init_interrupt_scheme_err; + + ret = nbl_dev_request_mailbox_irq(dev_mgt); + if (ret) + goto mailbox_request_irq_err; + + ret = nbl_dev_enable_mailbox_irq(dev_mgt); + if (ret) + goto enable_mailbox_irq_err; + nbl_dev_setup_chan_keepalive(dev_mgt, NBL_CHAN_TYPE_MAILBOX); + + return 0; +enable_mailbox_irq_err: + nbl_dev_free_mailbox_irq(dev_mgt); +mailbox_request_irq_err: + nbl_dev_clear_interrupt_scheme(dev_mgt); +init_interrupt_scheme_err: + 
nbl_dev_destroy_msix_map(dev_mgt); +config_msix_map_err: + return ret; +} + +static void nbl_dev_stop_common_dev(struct nbl_adapter *adapter) +{ + struct nbl_dev_mgt *dev_mgt = + (struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter); + + nbl_dev_remove_chan_keepalive(dev_mgt, NBL_CHAN_TYPE_MAILBOX); + nbl_dev_free_mailbox_irq(dev_mgt); + nbl_dev_disable_mailbox_irq(dev_mgt); + nbl_dev_clear_interrupt_scheme(dev_mgt); + nbl_dev_destroy_msix_map(dev_mgt); +} + +int nbl_dev_start(void *p, struct nbl_init_param *param) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + int ret; + + ret = nbl_dev_start_common_dev(adapter, param); + if (ret) + goto start_common_dev_fail; + + if (param->caps.has_ctrl) { + ret = nbl_dev_start_ctrl_dev(adapter, param); + if (ret) + goto start_ctrl_dev_fail; + } + + ret = nbl_dev_start_net_dev(adapter, param); + if (ret) + goto start_net_dev_fail; + + return 0; + +start_net_dev_fail: + nbl_dev_stop_ctrl_dev(adapter); +start_ctrl_dev_fail: + nbl_dev_stop_common_dev(adapter); +start_common_dev_fail: + return ret; +} + +void nbl_dev_stop(void *p) +{ + struct nbl_adapter *adapter = (struct nbl_adapter *)p; + + nbl_dev_stop_ctrl_dev(adapter); + nbl_dev_stop_net_dev(adapter); + nbl_dev_stop_common_dev(adapter); } diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c index 76a2a1513e2f..5118615c0dbe 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c @@ -15,6 +15,8 @@ static void nbl_serv_set_link_state(struct nbl_service_mgt *serv_mgt, struct net_device *netdev); +static int nbl_serv_update_default_vlan(struct nbl_service_mgt *serv_mgt, + u16 vid); static void nbl_serv_set_queue_param(struct nbl_serv_ring *ring, u16 desc_num, struct nbl_txrx_queue_param *param, @@ -154,6 +156,98 @@ static void nbl_serv_stop_rings(struct nbl_service_mgt *serv_mgt, 
disp_ops->stop_rx_ring(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), i); } +static int nbl_serv_set_tx_rings(struct nbl_serv_ring_mgt *ring_mgt, + struct net_device *netdev, struct device *dev) +{ + u16 ring_num = ring_mgt->tx_ring_num; + int i; + + ring_mgt->tx_rings = devm_kcalloc(dev, ring_num, + sizeof(*ring_mgt->tx_rings), + GFP_KERNEL); + if (!ring_mgt->tx_rings) + return -ENOMEM; + + for (i = 0; i < ring_num; i++) + ring_mgt->tx_rings[i].index = i; + + return 0; +} + +static void nbl_serv_remove_tx_ring(struct nbl_serv_ring_mgt *ring_mgt, + struct device *dev) +{ + devm_kfree(dev, ring_mgt->tx_rings); + ring_mgt->tx_rings = NULL; +} + +static int nbl_serv_set_rx_rings(struct nbl_serv_ring_mgt *ring_mgt, + struct net_device *netdev, struct device *dev) +{ + u16 ring_num = ring_mgt->rx_ring_num; + int i; + + ring_mgt->rx_rings = devm_kcalloc(dev, ring_num, + sizeof(*ring_mgt->rx_rings), + GFP_KERNEL); + if (!ring_mgt->rx_rings) + return -ENOMEM; + + for (i = 0; i < ring_num; i++) + ring_mgt->rx_rings[i].index = i; + + return 0; +} + +static void nbl_serv_remove_rx_ring(struct nbl_serv_ring_mgt *ring_mgt, + struct device *dev) +{ + devm_kfree(dev, ring_mgt->rx_rings); + ring_mgt->rx_rings = NULL; +} + +static int nbl_serv_set_vectors(struct nbl_service_mgt *serv_mgt, + struct net_device *netdev, struct device *dev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_resource_pt_ops *pt_ops = NBL_ADAPTER_TO_RES_PT_OPS(adapter); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u16 ring_num = ring_mgt->rx_ring_num; + int i; + + ring_mgt->vectors = devm_kcalloc(dev, ring_num, + sizeof(*ring_mgt->vectors), + GFP_KERNEL); + if (!ring_mgt->vectors) + return -ENOMEM; + + for (i = 0; i < ring_num; i++) { + ring_mgt->vectors[i].nbl_napi = + disp_ops->get_vector_napi(p, i); + netif_napi_add(netdev, 
&ring_mgt->vectors[i].nbl_napi->napi, + pt_ops->napi_poll); + ring_mgt->vectors[i].netdev = netdev; + cpumask_clear(&ring_mgt->vectors[i].cpumask); + } + + return 0; +} + +static void nbl_serv_remove_vectors(struct nbl_serv_ring_mgt *ring_mgt, + struct device *dev) +{ + u16 ring_num = ring_mgt->rx_ring_num; + int i; + + for (i = 0; i < ring_num; i++) + netif_napi_del(&ring_mgt->vectors[i].nbl_napi->napi); + + devm_kfree(dev, ring_mgt->vectors); + ring_mgt->vectors = NULL; +} + static void nbl_serv_check_flow_table_spec(struct nbl_service_mgt *serv_mgt) { struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); @@ -175,6 +269,17 @@ static void nbl_serv_check_flow_table_spec(struct nbl_service_mgt *serv_mgt) } } +static bool nbl_serv_check_need_flow_rule(u8 *mac, u16 promisc) +{ + if (!is_multicast_ether_addr(mac) && (promisc & BIT(NBL_PROMISC))) + return false; + + if (is_multicast_ether_addr(mac) && (promisc & BIT(NBL_ALLMULTI))) + return false; + + return true; +} + static struct nbl_serv_vlan_node *nbl_serv_alloc_vlan_node(void) { struct nbl_serv_vlan_node *vlan_node = NULL; @@ -196,6 +301,87 @@ static void nbl_serv_free_vlan_node(struct nbl_serv_vlan_node *vlan_node) kfree(vlan_node); } +static struct nbl_serv_submac_node *nbl_serv_alloc_submac_node(void) +{ + struct nbl_serv_submac_node *submac_node = NULL; + + submac_node = kzalloc(sizeof(*submac_node), GFP_ATOMIC); + if (!submac_node) + return NULL; + + INIT_LIST_HEAD(&submac_node->node); + submac_node->effective = 0; + + return submac_node; +} + +static void nbl_serv_free_submac_node(struct nbl_serv_submac_node *submac_node) +{ + kfree(submac_node); +} + +static int +nbl_serv_update_submac_node_effective(struct nbl_service_mgt *serv_mgt, + struct nbl_serv_submac_node *submac_node, + bool effective, u16 vsi) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct net_device *dev = net_resource_mgt->netdev; + struct nbl_dispatch_ops *disp_ops = 
NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + struct nbl_serv_vlan_node *vlan_node; + bool force_promisc = 0; + int ret = 0; + + if (submac_node->effective == effective) + return 0; + + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) { + if (!vlan_node->sub_mac_effective) + continue; + + if (effective) { + ret = disp_ops->add_macvlan(p, + submac_node->mac, + vlan_node->vid, vsi); + if (ret) + goto del_macvlan_node; + } else { + disp_ops->del_macvlan(p, + submac_node->mac, + vlan_node->vid, vsi); + } + } + submac_node->effective = effective; + if (effective) + flow_mgt->active_submac_list++; + else + flow_mgt->active_submac_list--; + + return 0; + +del_macvlan_node: + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) { + if (vlan_node->sub_mac_effective) + disp_ops->del_macvlan(p, + submac_node->mac, + vlan_node->vid, vsi); + } + + if (ret) { + force_promisc = 1; + if (flow_mgt->force_promisc ^ force_promisc) { + flow_mgt->force_promisc = force_promisc; + flow_mgt->pending_async_work = 1; + netdev_info(dev, "Reached MAC filter limit, forcing promisc/allmulti mode\n"); + } + } + + return 0; +} + static int nbl_serv_update_vlan_node_effective(struct nbl_service_mgt *serv_mgt, struct nbl_serv_vlan_node *vlan_node, @@ -279,6 +465,193 @@ nbl_serv_update_vlan_node_effective(struct nbl_service_mgt *serv_mgt, return ret; } +static void nbl_serv_del_submac_node(struct nbl_service_mgt *serv_mgt, u8 *mac, + u16 vsi) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_submac_node *mac_node, *mac_node_safe; + struct list_head *submac_head; + + if (is_multicast_ether_addr(mac)) + submac_head = &flow_mgt->submac_list[NBL_SUBMAC_MULTI]; + else + submac_head = &flow_mgt->submac_list[NBL_SUBMAC_UNICAST]; + + list_for_each_entry_safe(mac_node, mac_node_safe, submac_head, + node) + if 
(ether_addr_equal(mac_node->mac, mac)) { + if (mac_node->effective) + nbl_serv_update_submac_node_effective(serv_mgt, + mac_node, + 0, vsi); + list_del(&mac_node->node); + flow_mgt->submac_list_cnt--; + if (is_multicast_ether_addr(mac_node->mac)) + flow_mgt->multi_mac_cnt--; + else + flow_mgt->unicast_mac_cnt--; + nbl_serv_free_submac_node(mac_node); + break; + } +} + +static int nbl_serv_add_submac_node(struct nbl_service_mgt *serv_mgt, u8 *mac, + u16 vsi, u16 promisc) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_submac_node *submac_node; + struct list_head *submac_head; + + if (is_multicast_ether_addr(mac)) + submac_head = &flow_mgt->submac_list[NBL_SUBMAC_MULTI]; + else + submac_head = &flow_mgt->submac_list[NBL_SUBMAC_UNICAST]; + + list_for_each_entry(submac_node, submac_head, node) { + if (ether_addr_equal(submac_node->mac, mac)) + return 0; + } + + submac_node = nbl_serv_alloc_submac_node(); + if (!submac_node) + return -ENOMEM; + + submac_node->effective = 0; + ether_addr_copy(submac_node->mac, mac); + if (nbl_serv_check_need_flow_rule(mac, promisc) && + (flow_mgt->trusted_en || + flow_mgt->active_submac_list < NBL_NO_TRUST_MAX_MAC)) { + nbl_serv_update_submac_node_effective(serv_mgt, submac_node, 1, + vsi); + } + + list_add(&submac_node->node, submac_head); + flow_mgt->submac_list_cnt++; + if (is_multicast_ether_addr(mac)) + flow_mgt->multi_mac_cnt++; + else + flow_mgt->unicast_mac_cnt++; + + return 0; +} + +static void nbl_serv_update_mcast_submac(struct nbl_service_mgt *serv_mgt, + bool multi_effective, + bool unicast_effective, u16 vsi) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_submac_node *submac_node; + + list_for_each_entry(submac_node, + &flow_mgt->submac_list[NBL_SUBMAC_MULTI], node) + nbl_serv_update_submac_node_effective(serv_mgt, submac_node, + multi_effective, vsi); + + list_for_each_entry(submac_node, + 
&flow_mgt->submac_list[NBL_SUBMAC_UNICAST], node) + nbl_serv_update_submac_node_effective(serv_mgt, submac_node, + unicast_effective, vsi); +} + +static void nbl_serv_update_promisc_vlan(struct nbl_service_mgt *serv_mgt, + bool effective, u16 vsi) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_vlan_node *vlan_node; + + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) + nbl_serv_update_vlan_node_effective(serv_mgt, vlan_node, + effective, vsi); +} + +static void nbl_serv_del_all_vlans(struct nbl_service_mgt *serv_mgt) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + struct nbl_serv_vlan_node *vlan_node, *vlan_node_safe; + + list_for_each_entry_safe(vlan_node, vlan_node_safe, + &flow_mgt->vlan_list, node) { + if (vlan_node->primary_mac_effective) + disp_ops->del_macvlan(p, flow_mgt->mac, + vlan_node->vid, + NBL_COMMON_TO_VSI_ID(common)); + + list_del(&vlan_node->node); + nbl_serv_free_vlan_node(vlan_node); + } +} + +static void nbl_serv_del_all_submacs(struct nbl_service_mgt *serv_mgt, u16 vsi) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_submac_node *submac_node, *submac_node_safe; + int i; + + for (i = 0; i < NBL_SUBMAC_MAX; i++) + list_for_each_entry_safe(submac_node, submac_node_safe, + &flow_mgt->submac_list[i], node) { + nbl_serv_update_submac_node_effective(serv_mgt, + submac_node, + 0, vsi); + list_del(&submac_node->node); + flow_mgt->submac_list_cnt--; + if (is_multicast_ether_addr(submac_node->mac)) + flow_mgt->multi_mac_cnt--; + else + flow_mgt->unicast_mac_cnt--; + nbl_serv_free_submac_node(submac_node); + } +} + +void nbl_serv_cpu_affinity_init(void *priv, u16 rings_num) +{ + struct nbl_service_mgt *serv_mgt = (struct 
nbl_service_mgt *)priv; + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct device *dev = NBL_COMMON_TO_DEV(common); + int i; + + for (i = 0; i < rings_num; i++) { + cpumask_set_cpu(cpumask_local_spread(i, dev->numa_node), + &ring_mgt->vectors[i].cpumask); + netif_set_xps_queue(ring_mgt->vectors[i].netdev, + &ring_mgt->vectors[i].cpumask, i); + } +} + +static int nbl_serv_ipv6_exthdr_num(struct sk_buff *skb, int start, u8 nexthdr) +{ + struct ipv6_opt_hdr _hdr, *hp; + int exthdr_num = 0; + unsigned int hdrlen; + + while (ipv6_ext_hdr(nexthdr)) { + if (nexthdr == NEXTHDR_NONE) + return -1; + + hp = skb_header_pointer(skb, start, sizeof(_hdr), &_hdr); + if (!hp) + return -1; + + exthdr_num++; + + if (nexthdr == NEXTHDR_FRAGMENT) + hdrlen = 8; + else if (nexthdr == NEXTHDR_AUTH) + hdrlen = ipv6_authlen(hp); + else + hdrlen = ipv6_optlen(hp); + + nexthdr = hp->nexthdr; + start += hdrlen; + } + + return exthdr_num; +} + static void nbl_serv_set_sfp_state(void *priv, struct net_device *netdev, u8 eth_id, bool open, bool is_force) { @@ -470,6 +843,24 @@ int nbl_serv_vsi_stop(void *priv, u16 vsi_index) return 0; } +static struct nbl_mac_filter *nbl_add_filter(struct list_head *head, + const u8 *macaddr) +{ + struct nbl_mac_filter *f; + + if (!macaddr) + return NULL; + + f = kzalloc(sizeof(*f), GFP_ATOMIC); + if (!f) + return f; + + ether_addr_copy(f->macaddr, macaddr); + list_add_tail(&f->list, head); + + return f; +} + static int nbl_serv_abnormal_event_to_queue(int event_type) { switch (event_type) { @@ -482,6 +873,16 @@ static int nbl_serv_abnormal_event_to_queue(int event_type) } } +static int nbl_serv_stop_abnormal_sw_queue(struct nbl_service_mgt *serv_mgt, + u16 local_queue_id, int type) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + return disp_ops->stop_abnormal_sw_queue(p, + 
local_queue_id, type); +} + static int nbl_serv_chan_stop_abnormal_sw_queue_req(struct nbl_service_mgt *serv_mgt, u16 local_queue_id, u16 func_id, @@ -503,6 +904,58 @@ nbl_serv_chan_stop_abnormal_sw_queue_req(struct nbl_service_mgt *serv_mgt, return ret; } +static void nbl_serv_chan_stop_abnormal_sw_queue_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + struct nbl_chan_param_stop_abnormal_sw_queue *param = + (struct nbl_chan_param_stop_abnormal_sw_queue *)data; + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info; + struct nbl_chan_ack_info chan_ack; + int ret = 0; + + vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA]; + if (param->local_queue_id < vsi_info->ring_offset || + param->local_queue_id >= + vsi_info->ring_offset + vsi_info->ring_num || + !vsi_info->ring_num) { + ret = -EINVAL; + goto send_ack; + } + + ret = nbl_serv_stop_abnormal_sw_queue(serv_mgt, param->local_queue_id, + param->type); + +send_ack: + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_STOP_ABNORMAL_SW_QUEUE, + msg_id, ret, NULL, 0); + chan_ops->send_ack(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), &chan_ack); +} + +static dma_addr_t +nbl_serv_netdev_queue_restore(struct nbl_service_mgt *serv_mgt, + u16 local_queue_id, int type) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + return disp_ops->restore_abnormal_ring(p, + local_queue_id, type); +} + +static int nbl_serv_netdev_queue_restart(struct nbl_service_mgt *serv_mgt, + u16 local_queue_id, int type) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + + return disp_ops->restart_abnormal_ring(p, + local_queue_id, type); +} + static dma_addr_t 
nbl_serv_chan_restore_netdev_queue_req(struct nbl_service_mgt *serv_mgt, u16 local_queue_id, u16 func_id, @@ -527,6 +980,38 @@ nbl_serv_chan_restore_netdev_queue_req(struct nbl_service_mgt *serv_mgt, return dma; } +static void nbl_serv_chan_restore_netdev_queue_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + struct nbl_chan_param_restore_queue *param = + (struct nbl_chan_param_restore_queue *)data; + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info; + struct nbl_chan_ack_info chan_ack; + dma_addr_t dma = 0; + int ret = NBL_CHAN_RESP_OK; + + vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA]; + if (param->local_queue_id < vsi_info->ring_offset || + param->local_queue_id >= + vsi_info->ring_offset + vsi_info->ring_num || + !vsi_info->ring_num) { + ret = -EINVAL; + goto send_ack; + } + + dma = nbl_serv_netdev_queue_restore(serv_mgt, param->local_queue_id, + param->type); + +send_ack: + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_RESTORE_NETDEV_QUEUE, + msg_id, ret, &dma, sizeof(dma)); + chan_ops->send_ack(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), &chan_ack); +} + static int nbl_serv_chan_restart_netdev_queue_req(struct nbl_service_mgt *serv_mgt, u16 local_queue_id, u16 func_id, @@ -545,6 +1030,38 @@ nbl_serv_chan_restart_netdev_queue_req(struct nbl_service_mgt *serv_mgt, &chan_send); } +static void nbl_serv_chan_restart_netdev_queue_resp(void *priv, u16 src_id, + u16 msg_id, void *data, + u32 data_len) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + struct nbl_chan_param_restart_queue *param = + (struct nbl_chan_param_restart_queue *)data; + struct nbl_serv_ring_mgt *ring_mgt = + NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info 
*vsi_info; + struct nbl_chan_ack_info chan_ack; + int ret = 0; + + vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA]; + if (param->local_queue_id < vsi_info->ring_offset || + param->local_queue_id >= + vsi_info->ring_offset + vsi_info->ring_num || + !vsi_info->ring_num) { + ret = -EINVAL; + goto send_ack; + } + + ret = nbl_serv_netdev_queue_restart(serv_mgt, param->local_queue_id, + param->type); + +send_ack: + NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_RESTART_NETDEV_QUEUE, + msg_id, ret, NULL, 0); + chan_ops->send_ack(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), &chan_ack); +} + static int nbl_serv_start_abnormal_hw_queue(struct nbl_service_mgt *serv_mgt, u16 vsi_id, u16 local_queue_id, dma_addr_t dma, int type) @@ -636,19 +1153,88 @@ static void nbl_serv_restore_queue(struct nbl_service_mgt *serv_mgt, u16 vsi_id, rtnl_unlock(); } -int nbl_serv_netdev_open(struct net_device *netdev) +static void nbl_serv_handle_tx_timeout(struct work_struct *work) { - struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); - struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_serv_net_resource_mgt *serv_net_resource_mgt = + container_of(work, struct nbl_serv_net_resource_mgt, + tx_timeout); + struct nbl_service_mgt *serv_mgt = serv_net_resource_mgt->serv_mgt; struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); - struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); struct nbl_serv_ring_vsi_info *vsi_info; - int num_cpus, real_qps, ret = 0; + int i = 0; - if (!test_bit(NBL_DOWN, adapter->state)) - return -EBUSY; + vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA]; - netdev_dbg(netdev, "Nbl open\n"); + for (i = vsi_info->ring_offset; + i < vsi_info->ring_offset + vsi_info->ring_num; i++) { + if (ring_mgt->tx_rings[i].need_recovery) { + nbl_serv_restore_queue(serv_mgt, vsi_info->vsi_id, i, + NBL_TX, false); + ring_mgt->tx_rings[i].need_recovery = false; + } + } +} + +static void nbl_serv_update_link_state(struct work_struct *work) +{ + 
struct nbl_serv_net_resource_mgt *serv_net_resource_mgt = + container_of(work, struct nbl_serv_net_resource_mgt, + update_link_state); + struct nbl_service_mgt *serv_mgt = serv_net_resource_mgt->serv_mgt; + + nbl_serv_set_link_state(serv_mgt, serv_net_resource_mgt->netdev); +} + +static void nbl_serv_update_vlan(struct work_struct *work) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt = + container_of(work, struct nbl_serv_net_resource_mgt, + update_vlan); + struct nbl_service_mgt *serv_mgt = net_resource_mgt->serv_mgt; + struct net_device *netdev = net_resource_mgt->netdev; + int was_running, err; + u16 vid; + + vid = net_resource_mgt->vlan_tci & VLAN_VID_MASK; + nbl_serv_update_default_vlan(serv_mgt, vid); + + rtnl_lock(); + was_running = netif_running(netdev); + + if (was_running) { + err = nbl_serv_netdev_stop(netdev); + if (err) { + netdev_err(netdev, + "Netdev stop failed while update_vlan\n"); + goto netdev_stop_fail; + } + + err = nbl_serv_netdev_open(netdev); + if (err) { + netdev_err(netdev, + "Netdev open failed after update_vlan\n"); + goto netdev_open_fail; + } + } + +netdev_stop_fail: +netdev_open_fail: + rtnl_unlock(); +} + +int nbl_serv_netdev_open(struct net_device *netdev) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_serv_ring_vsi_info *vsi_info; + int num_cpus, real_qps, ret = 0; + + if (!test_bit(NBL_DOWN, adapter->state)) + return -EBUSY; + + netdev_dbg(netdev, "Nbl open\n"); netif_carrier_off(netdev); nbl_serv_set_sfp_state(serv_mgt, netdev, NBL_COMMON_TO_ETH_ID(common), @@ -715,6 +1301,104 @@ int nbl_serv_netdev_stop(struct net_device *netdev) return 0; } +static int nbl_serv_change_mtu(struct net_device *netdev, int new_mtu) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(netdev); + 
struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + int was_running = 0, err = 0; + int max_mtu; + + max_mtu = disp_ops->get_max_mtu(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); + if (new_mtu > max_mtu) + netdev_notice(netdev, "new_mtu(%d) > current max_mtu(%d), try to rebuild rx buffer\n", + new_mtu, max_mtu); + + if (new_mtu) { + netdev->mtu = new_mtu; + was_running = netif_running(netdev); + if (was_running) { + err = nbl_serv_netdev_stop(netdev); + if (err) { + netdev_err(netdev, "Netdev stop failed while changing mtu\n"); + return err; + } + + err = nbl_serv_netdev_open(netdev); + if (err) { + netdev_err(netdev, "Netdev open failed after changing mtu\n"); + return err; + } + } + } + + disp_ops->set_mtu(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common), new_mtu); + + return 0; +} + +static int nbl_serv_set_mac(struct net_device *dev, void *p) +{ + struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(dev); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter); + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_vlan_node *vlan_node; + struct sockaddr *addr = p; + struct nbl_netdev_priv *priv = netdev_priv(dev); + int ret = 0; + + if (!is_valid_ether_addr(addr->sa_data)) { + netdev_err(dev, "Attempt to set an invalid mac address %pM\n", + addr->sa_data); + return -EADDRNOTAVAIL; + } + + if (ether_addr_equal(flow_mgt->mac, addr->sa_data)) + return 0; + + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) { + if (!vlan_node->primary_mac_effective) + continue; + disp_ops->del_macvlan(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + flow_mgt->mac, vlan_node->vid, + priv->data_vsi); + ret = 
disp_ops->add_macvlan(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + addr->sa_data, vlan_node->vid, + priv->data_vsi); + if (ret) { + netdev_err(dev, "Failed to cfg macvlan on vid %u\n", + vlan_node->vid); + goto fail; + } + } + + ether_addr_copy(flow_mgt->mac, addr->sa_data); + eth_hw_addr_set(dev, addr->sa_data); + + if (!NBL_COMMON_TO_VF_CAP(common)) + disp_ops->set_eth_mac_addr(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + addr->sa_data, + NBL_COMMON_TO_ETH_ID(common)); + + return 0; +fail: + list_for_each_entry(vlan_node, &flow_mgt->vlan_list, node) { + if (!vlan_node->primary_mac_effective) + continue; + disp_ops->del_macvlan(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + addr->sa_data, vlan_node->vid, + priv->data_vsi); + disp_ops->add_macvlan(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + flow_mgt->mac, vlan_node->vid, + priv->data_vsi); + } + return -EAGAIN; +} + static int nbl_serv_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid) { struct nbl_adapter *adapter = NBL_NETDEV_TO_ADAPTER(dev); @@ -813,6 +1497,82 @@ static int nbl_serv_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid) return 0; } +static int nbl_serv_update_default_vlan(struct nbl_service_mgt *serv_mgt, + u16 vid) +{ + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_serv_vlan_node *vlan_node = NULL; + struct nbl_serv_vlan_node *node, *tmp; + struct nbl_common_info *common; + bool other_effective = false; + int ret; + u16 vsi; + + if (flow_mgt->vid == vid) + return 0; + + common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + vsi = NBL_COMMON_TO_VSI_ID(common); + rtnl_lock(); + + list_for_each_entry(node, &flow_mgt->vlan_list, node) { + if (node->vid == vid) { + node->ref_cnt++; + vlan_node = node; + break; + } + } + + if (!vlan_node) + vlan_node = nbl_serv_alloc_vlan_node(); + + if (!vlan_node) { + rtnl_unlock(); + return -ENOMEM; + } + + vlan_node->vid = vid; + /* when restoring to the default vlan id 0, we need to restore the other vlan interfaces */ + if (!vid) + other_effective = true; + 
list_for_each_entry_safe(node, tmp, &flow_mgt->vlan_list, node) { + if (node->vid == flow_mgt->vid && node != vlan_node) { + node->ref_cnt--; + if (!node->ref_cnt) { + nbl_serv_update_vlan_node_effective(serv_mgt, + node, + 0, vsi); + list_del(&node->node); + nbl_serv_free_vlan_node(node); + } + } else if (node->vid != vid) { + nbl_serv_update_vlan_node_effective(serv_mgt, node, + other_effective, + vsi); + } + } + + ret = nbl_serv_update_vlan_node_effective(serv_mgt, vlan_node, 1, vsi); + if (ret) + goto free_vlan_node; + + if (vlan_node->ref_cnt == 1) + list_add(&vlan_node->node, &flow_mgt->vlan_list); + + flow_mgt->vid = vid; + rtnl_unlock(); + + return 0; + +free_vlan_node: + vlan_node->ref_cnt--; + if (!vlan_node->ref_cnt) + nbl_serv_free_vlan_node(vlan_node); + rtnl_unlock(); + + return ret; +} + static void nbl_serv_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats) { @@ -844,6 +1604,406 @@ static void nbl_serv_get_stats64(struct net_device *netdev, stats->tx_dropped = 0; } +static int nbl_addr_unsync(struct net_device *netdev, const u8 *addr) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt; + struct nbl_service_mgt *serv_mgt; + struct nbl_adapter *adapter; + + adapter = NBL_NETDEV_TO_ADAPTER(netdev); + serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + net_resource_mgt = NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + + if (ether_addr_equal(addr, netdev->dev_addr)) + return 0; + + if (!nbl_add_filter(&net_resource_mgt->tmp_del_filter_list, addr)) + return -ENOMEM; + + net_resource_mgt->update_submac = 1; + return 0; +} + +static int nbl_addr_sync(struct net_device *netdev, const u8 *addr) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt; + struct nbl_service_mgt *serv_mgt; + struct nbl_adapter *adapter; + + adapter = NBL_NETDEV_TO_ADAPTER(netdev); + serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + net_resource_mgt = NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + + if (ether_addr_equal(addr, netdev->dev_addr)) + return 0; + + if 
(!nbl_add_filter(&net_resource_mgt->tmp_add_filter_list, addr)) + return -ENOMEM; + + net_resource_mgt->update_submac = 1; + return 0; +} + +static void +nbl_modify_submacs(struct nbl_serv_net_resource_mgt *net_resource_mgt) +{ + struct nbl_service_mgt *serv_mgt = net_resource_mgt->serv_mgt; + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct net_device *netdev = net_resource_mgt->netdev; + struct nbl_netdev_priv *priv = netdev_priv(netdev); + struct nbl_mac_filter *filter, *safe_filter; + + INIT_LIST_HEAD(&net_resource_mgt->tmp_add_filter_list); + INIT_LIST_HEAD(&net_resource_mgt->tmp_del_filter_list); + net_resource_mgt->update_submac = 0; + + netif_addr_lock_bh(netdev); + __dev_uc_sync(net_resource_mgt->netdev, nbl_addr_sync, nbl_addr_unsync); + __dev_mc_sync(net_resource_mgt->netdev, nbl_addr_sync, nbl_addr_unsync); + netif_addr_unlock_bh(netdev); + + if (!net_resource_mgt->update_submac) + return; + + rtnl_lock(); + list_for_each_entry_safe(filter, safe_filter, + &net_resource_mgt->tmp_del_filter_list, list) { + nbl_serv_del_submac_node(serv_mgt, filter->macaddr, + priv->data_vsi); + list_del(&filter->list); + kfree(filter); + } + + list_for_each_entry_safe(filter, safe_filter, + &net_resource_mgt->tmp_add_filter_list, list) { + nbl_serv_add_submac_node(serv_mgt, filter->macaddr, + priv->data_vsi, flow_mgt->promisc); + list_del(&filter->list); + kfree(filter); + } + + nbl_serv_check_flow_table_spec(serv_mgt); + rtnl_unlock(); +} + +static void +nbl_modify_promisc_mode(struct nbl_serv_net_resource_mgt *net_resource_mgt) +{ + struct nbl_service_mgt *serv_mgt = net_resource_mgt->serv_mgt; + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct net_device *netdev = net_resource_mgt->netdev; + struct nbl_netdev_priv *priv = netdev_priv(netdev); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + bool mode = 0, multi = 0; + bool 
need_flow = 1; + bool unicast_enable, multicast_enable; + + rtnl_lock(); + net_resource_mgt->curr_promiscuout_mode = netdev->flags; + + if (((netdev->flags & (IFF_PROMISC)) || flow_mgt->force_promisc) && + !NBL_COMMON_TO_VF_CAP(NBL_SERV_MGT_TO_COMMON(serv_mgt))) + mode = 1; + + if ((netdev->flags & (IFF_PROMISC | IFF_ALLMULTI)) || + flow_mgt->force_promisc) + multi = 1; + + if (!flow_mgt->trusted_en) + multi = 0; + + unicast_enable = !mode && need_flow; + multicast_enable = !multi && need_flow; + + if ((flow_mgt->promisc & BIT(NBL_PROMISC)) ^ (mode << NBL_PROMISC)) + if (!NBL_COMMON_TO_VF_CAP(NBL_SERV_MGT_TO_COMMON(serv_mgt))) { + disp_ops->set_promisc_mode(p, + priv->data_vsi, mode); + if (mode) + flow_mgt->promisc |= BIT(NBL_PROMISC); + else + flow_mgt->promisc &= ~BIT(NBL_PROMISC); + } + + if ((flow_mgt->promisc & BIT(NBL_ALLMULTI)) ^ (multi << NBL_ALLMULTI)) { + disp_ops->cfg_multi_mcast(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + priv->data_vsi, multi); + if (multi) + flow_mgt->promisc |= BIT(NBL_ALLMULTI); + else + flow_mgt->promisc &= ~BIT(NBL_ALLMULTI); + } + + if (flow_mgt->mcast_flow_en ^ multicast_enable) { + nbl_serv_update_mcast_submac(serv_mgt, multicast_enable, + unicast_enable, priv->data_vsi); + flow_mgt->mcast_flow_en = multicast_enable; + } + + if (flow_mgt->ucast_flow_en ^ unicast_enable) { + nbl_serv_update_promisc_vlan(serv_mgt, unicast_enable, + priv->data_vsi); + flow_mgt->ucast_flow_en = unicast_enable; + } + + if (flow_mgt->trusted_update) { + flow_mgt->trusted_update = 0; + if (flow_mgt->active_submac_list < flow_mgt->submac_list_cnt) + nbl_serv_update_mcast_submac(serv_mgt, + flow_mgt->mcast_flow_en, + flow_mgt->ucast_flow_en, + priv->data_vsi); + } + rtnl_unlock(); +} + +static void nbl_serv_set_rx_mode(struct net_device *dev) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt; + struct nbl_service_mgt *serv_mgt; + struct nbl_adapter *adapter; + + adapter = NBL_NETDEV_TO_ADAPTER(dev); + serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + 
net_resource_mgt = NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + + nbl_common_queue_work(&net_resource_mgt->rx_mode_async, false); +} + +static void nbl_serv_change_rx_flags(struct net_device *dev, int flag) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt; + struct nbl_service_mgt *serv_mgt; + struct nbl_adapter *adapter; + + adapter = NBL_NETDEV_TO_ADAPTER(dev); + serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + net_resource_mgt = NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + + nbl_common_queue_work(&net_resource_mgt->rx_mode_async, false); +} + +static netdev_features_t nbl_serv_features_check(struct sk_buff *skb, + struct net_device *dev, + netdev_features_t features) +{ + u32 l2_l3_hrd_len = 0, l4_hrd_len = 0, total_hrd_len = 0; + u8 l4_proto = 0; + __be16 protocol, frag_off; + unsigned char *exthdr; + unsigned int offset = 0; + int nexthdr = 0; + int exthdr_num = 0; + int ret; + union { + struct iphdr *v4; + struct ipv6hdr *v6; + unsigned char *hdr; + } ip; + union { + struct tcphdr *tcp; + struct udphdr *udp; + unsigned char *hdr; + } l4; + + /* No point in doing any of this if neither checksum nor GSO are + * being requested for this frame. We can rule out both by just + * checking for CHECKSUM_PARTIAL. + */ + if (skb->ip_summed != CHECKSUM_PARTIAL) + return features; + + /* We cannot support GSO if the MSS is going to be less than + * 256 bytes or bigger than 16383 bytes. If it is then we need + * to drop support for GSO. 
+ */ + if (skb_is_gso(skb) && + (skb_shinfo(skb)->gso_size < NBL_TX_TSO_MSS_MIN || + skb_shinfo(skb)->gso_size > NBL_TX_TSO_MSS_MAX)) + features &= ~NETIF_F_GSO_MASK; + + l2_l3_hrd_len = (u32)(skb_transport_header(skb) - skb->data); + + ip.hdr = skb_network_header(skb); + l4.hdr = skb_transport_header(skb); + protocol = vlan_get_protocol(skb); + + if (protocol == htons(ETH_P_IP)) { + l4_proto = ip.v4->protocol; + } else if (protocol == htons(ETH_P_IPV6)) { + exthdr = ip.hdr + sizeof(*ip.v6); + l4_proto = ip.v6->nexthdr; + if (l4.hdr != exthdr) { + ret = ipv6_skip_exthdr(skb, exthdr - skb->data, + &l4_proto, &frag_off); + if (ret < 0) + goto out_rm_features; + } + + /* IPV6 extension headers + * (1) do not support routing and destination extension headers + * (2) support at most 2 extension headers + */ + nexthdr = ipv6_find_hdr(skb, &offset, NEXTHDR_ROUTING, NULL, + NULL); + if (nexthdr == NEXTHDR_ROUTING) { + netdev_info(dev, + "skb contains ipv6 routing ext header\n"); + goto out_rm_features; + } + + nexthdr = ipv6_find_hdr(skb, &offset, NEXTHDR_DEST, NULL, NULL); + if (nexthdr == NEXTHDR_DEST) { + netdev_info(dev, + "skb contains ipv6 dest ext header\n"); + goto out_rm_features; + } + + exthdr_num = nbl_serv_ipv6_exthdr_num(skb, exthdr - skb->data, + ip.v6->nexthdr); + if (exthdr_num < 0 || exthdr_num > 2) { + netdev_info(dev, "skb ipv6 exthdr_num:%d\n", + exthdr_num); + goto out_rm_features; + } + } else { + goto out_rm_features; + } + + switch (l4_proto) { + case IPPROTO_TCP: + l4_hrd_len = (l4.tcp->doff) * 4; + break; + case IPPROTO_UDP: + l4_hrd_len = sizeof(struct udphdr); + break; + case IPPROTO_SCTP: + l4_hrd_len = sizeof(struct sctphdr); + break; + default: + goto out_rm_features; + } + + total_hrd_len = l2_l3_hrd_len + l4_hrd_len; + + /* TX checksum offload supports a total header len in [0, 255] */ + if (total_hrd_len > NBL_TX_CHECKSUM_OFFLOAD_L2L3L4_HDR_LEN_MAX) + goto out_rm_features; + + /* TSO supports a total header len in [42, 128] */ + if (total_hrd_len < 
NBL_TX_TSO_L2L3L4_HDR_LEN_MIN || + total_hrd_len > NBL_TX_TSO_L2L3L4_HDR_LEN_MAX) + features &= ~NETIF_F_GSO_MASK; + + if (skb->encapsulation) + goto out_rm_features; + + return features; + +out_rm_features: + return features & ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | + NETIF_F_SCTP_CRC | NETIF_F_GSO_MASK); +} + +static int nbl_serv_config_rxhash(void *priv, bool enable) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = + NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info = + &ring_mgt->vsi_info[NBL_VSI_DATA]; + struct device *dev = NBL_SERV_MGT_TO_DEV(serv_mgt); + u32 rxfh_indir_size = 0; + u32 *indir = NULL; + int i = 0; + + disp_ops->get_rxfh_indir_size(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common), + &rxfh_indir_size); + indir = devm_kcalloc(dev, rxfh_indir_size, sizeof(u32), GFP_KERNEL); + if (!indir) + return -ENOMEM; + if (enable) { + if (ring_mgt->rss_indir_user) { + memcpy(indir, ring_mgt->rss_indir_user, + rxfh_indir_size * sizeof(u32)); + } else { + for (i = 0; i < rxfh_indir_size; i++) + indir[i] = i % vsi_info->active_ring_num; + } + } + disp_ops->set_rxfh_indir(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common), indir, + rxfh_indir_size); + devm_kfree(dev, indir); + return 0; +} + +static int nbl_serv_set_features(struct net_device *netdev, + netdev_features_t features) +{ + struct nbl_netdev_priv *priv = netdev_priv(netdev); + struct nbl_adapter *adapter = NBL_NETDEV_PRIV_TO_ADAPTER(priv); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + netdev_features_t changed = netdev->features ^ features; + bool enable = false; + + if (changed & NETIF_F_RXHASH) { + enable = !!(features & NETIF_F_RXHASH); + nbl_serv_config_rxhash(serv_mgt, enable); + } + + return 0; +} + +static 
u16 +nbl_serv_select_queue(struct net_device *netdev, struct sk_buff *skb, + struct net_device *sb_dev) +{ + return netdev_pick_tx(netdev, skb, sb_dev); +} + +static void nbl_serv_tx_timeout(struct net_device *netdev, unsigned int txqueue) +{ + struct nbl_netdev_priv *priv = netdev_priv(netdev); + struct nbl_adapter *adapter = NBL_NETDEV_PRIV_TO_ADAPTER(priv); + struct nbl_service_mgt *serv_mgt = NBL_ADAP_TO_SERV_MGT(adapter); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info; + + vsi_info = &ring_mgt->vsi_info[NBL_VSI_DATA]; + + ring_mgt->tx_rings[vsi_info->ring_offset + txqueue].need_recovery = + true; + ring_mgt->tx_rings[vsi_info->ring_offset + txqueue].tx_timeout_count++; + + netif_warn(common, drv, netdev, "TX timeout on queue %u\n", txqueue); + + nbl_common_queue_work(&net_resource_mgt->tx_timeout, false); +} + +static int nbl_serv_get_phys_port_name(struct net_device *dev, char *name, + size_t len) +{ + struct nbl_common_info *common = NBL_NETDEV_TO_COMMON(dev); + u8 pf_id; + + pf_id = common->eth_id; + if ((NBL_COMMON_TO_ETH_MODE(common) == NBL_TWO_ETHERNET_PORT) && + common->eth_id == 2) + pf_id = 1; + + if (snprintf(name, len, "p%u", pf_id) >= len) + return -EOPNOTSUPP; + return 0; +} + static int nbl_serv_register_net(void *priv, struct nbl_register_net_param *register_param, struct nbl_register_net_result *register_result) @@ -864,6 +2024,361 @@ static int nbl_serv_unregister_net(void *priv) return disp_ops->unregister_net(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); } +static int nbl_serv_setup_txrx_queues(void *priv, u16 vsi_id, u16 queue_num, + u16 net_vector_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct 
nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_vector *vec; + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + int i, ret = 0; + + /* queue_num include user&kernel queue */ + ret = disp_ops->alloc_txrx_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id, queue_num); + if (ret) + return -EFAULT; + + /* ring_mgt->tx_ring_number only for kernel use */ + for (i = 0; i < ring_mgt->tx_ring_num; i++) { + ring_mgt->tx_rings[i].local_queue_id = NBL_PAIR_ID_GET_TX(i); + ring_mgt->rx_rings[i].local_queue_id = NBL_PAIR_ID_GET_RX(i); + } + + for (i = 0; i < ring_mgt->rx_ring_num; i++) { + vec = &ring_mgt->vectors[i]; + vec->local_vec_id = i + net_vector_id; + vec->global_vec_id = + disp_ops->get_global_vector(p, + vsi_id, + vec->local_vec_id); + vec->irq_enable_base = (u8 __iomem *) + disp_ops->get_msix_irq_enable_info(p, + vec->global_vec_id, + &vec->irq_data); + + disp_ops->set_vector_info(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vec->irq_enable_base, + vec->irq_data, i, + ring_mgt->net_msix_mask_en); + } + + return 0; +} + +static void nbl_serv_remove_txrx_queues(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->free_txrx_queues(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id); +} + +static int nbl_serv_init_tx_rate(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u16 func_id; + int ret = 0; + + if (net_resource_mgt->max_tx_rate) { + func_id = disp_ops->get_function_id(p, vsi_id); + ret = disp_ops->set_tx_rate(p, func_id, + net_resource_mgt->max_tx_rate, + 0); + } + + return ret; +} + +static int nbl_serv_setup_q2vsi(void *priv, u16 
vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->setup_q2vsi(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id); +} + +static void nbl_serv_remove_q2vsi(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->remove_q2vsi(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id); +} + +static int nbl_serv_setup_rss(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->setup_rss(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id); +} + +static void nbl_serv_remove_rss(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->remove_rss(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id); +} + +static int nbl_serv_setup_rss_indir(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info = + &ring_mgt->vsi_info[NBL_VSI_DATA]; + struct device *dev = NBL_SERV_MGT_TO_DEV(serv_mgt); + u32 rxfh_indir_size = 0; + int num_cpus = 0, real_qps = 0; + u32 *indir = NULL; + int i = 0; + + disp_ops->get_rxfh_indir_size(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id, &rxfh_indir_size); + indir = devm_kcalloc(dev, rxfh_indir_size, sizeof(u32), GFP_KERNEL); + if (!indir) + return -ENOMEM; + + num_cpus = num_online_cpus(); + real_qps = num_cpus > vsi_info->ring_num ? 
vsi_info->ring_num : + num_cpus; + + for (i = 0; i < rxfh_indir_size; i++) + indir[i] = i % real_qps; + + disp_ops->set_rxfh_indir(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id, + indir, rxfh_indir_size); + devm_kfree(dev, indir); + return 0; +} + +static int nbl_serv_alloc_rings(void *priv, struct net_device *netdev, + struct nbl_ring_param *param) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_ring_mgt *ring_mgt; + struct nbl_dispatch_ops *disp_ops; + struct device *dev; + int ret = 0; + + dev = NBL_SERV_MGT_TO_DEV(serv_mgt); + ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + ring_mgt->tx_ring_num = param->tx_ring_num; + ring_mgt->rx_ring_num = param->rx_ring_num; + ring_mgt->tx_desc_num = param->queue_size; + ring_mgt->rx_desc_num = param->queue_size; + + ret = disp_ops->alloc_rings(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), netdev, + param); + if (ret) + goto alloc_rings_fail; + + ret = nbl_serv_set_tx_rings(ring_mgt, netdev, dev); + if (ret) + goto set_tx_fail; + ret = nbl_serv_set_rx_rings(ring_mgt, netdev, dev); + if (ret) + goto set_rx_fail; + + ret = nbl_serv_set_vectors(serv_mgt, netdev, dev); + if (ret) + goto set_vectors_fail; + + return 0; + +set_vectors_fail: + nbl_serv_remove_rx_ring(ring_mgt, dev); +set_rx_fail: + nbl_serv_remove_tx_ring(ring_mgt, dev); +set_tx_fail: + disp_ops->remove_rings(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); +alloc_rings_fail: + return ret; +} + +static void nbl_serv_free_rings(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_ring_mgt *ring_mgt; + struct nbl_dispatch_ops *disp_ops; + struct device *dev; + + dev = NBL_SERV_MGT_TO_DEV(serv_mgt); + ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + nbl_serv_remove_vectors(ring_mgt, dev); + nbl_serv_remove_rx_ring(ring_mgt, dev); + nbl_serv_remove_tx_ring(ring_mgt, dev); + + 
disp_ops->remove_rings(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt));
+}
+
+static int nbl_serv_enable_napis(void *priv, u16 vsi_index)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+	struct nbl_serv_ring_vsi_info *vsi_info =
+		&ring_mgt->vsi_info[vsi_index];
+	u16 start = vsi_info->ring_offset,
+	    end = vsi_info->ring_offset + vsi_info->ring_num;
+	int i;
+
+	for (i = start; i < end; i++)
+		napi_enable(&ring_mgt->vectors[i].nbl_napi->napi);
+
+	return 0;
+}
+
+static void nbl_serv_disable_napis(void *priv, u16 vsi_index)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+	struct nbl_serv_ring_vsi_info *vsi_info =
+		&ring_mgt->vsi_info[vsi_index];
+	u16 start = vsi_info->ring_offset,
+	    end = vsi_info->ring_offset + vsi_info->ring_num;
+	int i;
+
+	for (i = start; i < end; i++)
+		napi_disable(&ring_mgt->vectors[i].nbl_napi->napi);
+}
+
+static void nbl_serv_set_mask_en(void *priv, bool enable)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_ring_mgt *ring_mgt;
+
+	ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+
+	ring_mgt->net_msix_mask_en = enable;
+}
+
+static int nbl_serv_start_net_flow(void *priv, struct net_device *netdev,
+				   u16 vsi_id, u16 vid, bool trusted)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt);
+	struct nbl_serv_vlan_node *vlan_node;
+	u8 mac[ETH_ALEN];
+	int ret = 0;
+
+	flow_mgt->ucast_flow_en = true;
+	flow_mgt->mcast_flow_en = true;
+	/* Clear cfgs, in case this function exited abnormally last time */
+	disp_ops->clear_flow(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+
disp_ops->set_mtu(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+			  NBL_COMMON_TO_VSI_ID(common), netdev->mtu);
+
+	if (!list_empty(&flow_mgt->vlan_list))
+		return -ECONNRESET;
+
+	vlan_node = nbl_serv_alloc_vlan_node();
+	if (!vlan_node) {
+		ret = -ENOMEM;
+		goto alloc_fail;
+	}
+
+	flow_mgt->vid = vid;
+	flow_mgt->trusted_en = trusted;
+	vlan_node->vid = vid;
+	ether_addr_copy(flow_mgt->mac, netdev->dev_addr);
+	ret = nbl_serv_update_vlan_node_effective(serv_mgt, vlan_node, 1,
+						  vsi_id);
+	if (ret)
+		goto add_macvlan_fail;
+
+	list_add(&vlan_node->node, &flow_mgt->vlan_list);
+	flow_mgt->vlan_list_cnt++;
+
+	memset(mac, 0xFF, ETH_ALEN);
+	ret = nbl_serv_add_submac_node(serv_mgt, mac, vsi_id, 0);
+	if (ret)
+		goto add_submac_failed;
+
+	return 0;
+
+add_submac_failed:
+	nbl_serv_update_vlan_node_effective(serv_mgt, vlan_node, 0, vsi_id);
+add_macvlan_fail:
+	nbl_serv_free_vlan_node(vlan_node);
+alloc_fail:
+	return ret;
+}
+
+static void nbl_serv_stop_net_flow(void *priv, u16 vsi_id)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_net_resource_mgt *net_resource_mgt =
+		NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	struct nbl_serv_flow_mgt *flow_mgt =
+		NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt);
+	struct net_device *dev = net_resource_mgt->netdev;
+	struct nbl_netdev_priv *net_priv = netdev_priv(dev);
+
+	nbl_serv_del_all_submacs(serv_mgt, net_priv->data_vsi);
+	nbl_serv_del_all_vlans(serv_mgt);
+
+	disp_ops->del_multi_rule(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+	memset(flow_mgt->mac, 0, sizeof(flow_mgt->mac));
+}
+
+static void nbl_serv_clear_flow(void *priv, u16 vsi_id)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+	disp_ops->clear_flow(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id);
+}
+
+static int nbl_serv_set_promisc_mode(void *priv, u16 vsi_id, u16 mode)
+{
+	struct nbl_service_mgt
*serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->set_promisc_mode(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id, mode); +} + +static int nbl_serv_cfg_multi_mcast(void *priv, u16 vsi_id, u16 enable) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->cfg_multi_mcast(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id, enable); +} + +static int nbl_serv_set_lldp_flow(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->add_lldp_flow(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vsi_id); +} + +static void nbl_serv_remove_lldp_flow(void *priv, u16 vsi_id) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + disp_ops->del_lldp_flow(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id); +} + static int nbl_serv_start_mgt_flow(void *priv) { struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; @@ -956,6 +2471,176 @@ static int nbl_serv_destroy_chip(void *p) return 0; } +static int nbl_serv_configure_msix_map(void *priv, u16 num_net_msix, + u16 num_others_msix, + bool net_msix_mask_en) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + int ret = 0; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + ret = disp_ops->configure_msix_map(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + num_net_msix, num_others_msix, + net_msix_mask_en); + if (ret) + return -EIO; + + return 0; +} + +static int nbl_serv_destroy_msix_map(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + int ret = 0; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); 
+ + ret = disp_ops->destroy_msix_map(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt)); + if (ret) + return -EIO; + + return 0; +} + +static int nbl_serv_enable_mailbox_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + int ret = 0; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + ret = disp_ops->enable_mailbox_irq(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vector_id, enable_msix); + if (ret) + return -EIO; + + return 0; +} + +static int nbl_serv_enable_abnormal_irq(void *priv, u16 vector_id, + bool enable_msix) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops; + int ret = 0; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + ret = disp_ops->enable_abnormal_irq(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + vector_id, enable_msix); + if (ret) + return -EIO; + + return 0; +} + +static irqreturn_t nbl_serv_clean_rings(int __always_unused irq, void *data) +{ + struct nbl_serv_vector *vector = (struct nbl_serv_vector *)data; + + napi_schedule_irqoff(&vector->nbl_napi->napi); + + return IRQ_HANDLED; +} + +static int nbl_serv_request_net_irq(void *priv, + struct nbl_msix_info_param *msix_info) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct device *dev = NBL_COMMON_TO_DEV(common); + struct nbl_serv_ring *tx_ring, *rx_ring; + struct nbl_serv_vector *vector; + u32 irq_num; + int i, ret = 0; + + for (i = 0; i < ring_mgt->tx_ring_num; i++) { + tx_ring = &ring_mgt->tx_rings[i]; + rx_ring = &ring_mgt->rx_rings[i]; + vector = &ring_mgt->vectors[i]; + vector->tx_ring = tx_ring; + vector->rx_ring = rx_ring; + + irq_num = msix_info->msix_entries[i].vector; + 
snprintf(vector->name, sizeof(vector->name),
+			 "nbl_txrx%d@pci:%s", i,
+			 pci_name(NBL_COMMON_TO_PDEV(common)));
+		ret = devm_request_irq(dev, irq_num, nbl_serv_clean_rings, 0,
+				       vector->name, vector);
+		if (ret) {
+			nbl_err(common, "TxRx queue %d failed to request irq, err %d",
+				i, ret);
+			goto request_irq_err;
+		}
+		if (!cpumask_empty(&vector->cpumask))
+			irq_set_affinity_hint(irq_num, &vector->cpumask);
+	}
+
+	net_resource_mgt->num_net_msix = msix_info->msix_num;
+
+	return 0;
+
+request_irq_err:
+	while (--i >= 0) {
+		vector = &ring_mgt->vectors[i];
+
+		irq_num = msix_info->msix_entries[i].vector;
+		irq_set_affinity_hint(irq_num, NULL);
+		devm_free_irq(dev, irq_num, vector);
+	}
+	return ret;
+}
+
+static void nbl_serv_free_net_irq(void *priv,
+				  struct nbl_msix_info_param *msix_info)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(common);
+	struct nbl_serv_vector *vector;
+	u32 irq_num;
+	int i;
+
+	for (i = 0; i < ring_mgt->tx_ring_num; i++) {
+		vector = &ring_mgt->vectors[i];
+
+		irq_num = msix_info->msix_entries[i].vector;
+		irq_set_affinity_hint(irq_num, NULL);
+		devm_free_irq(dev, irq_num, vector);
+	}
+}
+
+static u16 nbl_serv_get_global_vector(void *priv, u16 local_vec_id)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+	return disp_ops->get_global_vector(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+					   NBL_COMMON_TO_VSI_ID(common),
+					   local_vec_id);
+}
+
+static u16 nbl_serv_get_msix_entry_id(void *priv, u16 local_vec_id)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct nbl_dispatch_ops *disp_ops =
NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + return disp_ops->get_msix_entry_id(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + NBL_COMMON_TO_VSI_ID(common), + local_vec_id); +} + static u16 nbl_serv_get_vsi_id(void *priv, u16 func_id, u16 type) { struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; @@ -1000,6 +2685,31 @@ static void nbl_serv_set_netdev_ops(void *priv, (void *)net_device_ops; } +static void nbl_serv_rx_mode_async_task(struct work_struct *work) +{ + struct nbl_serv_net_resource_mgt *serv_net_resource_mgt = + container_of(work, struct nbl_serv_net_resource_mgt, + rx_mode_async); + + nbl_modify_submacs(serv_net_resource_mgt); + nbl_modify_promisc_mode(serv_net_resource_mgt); +} + +static void nbl_serv_net_task_service_timer(struct timer_list *t) +{ + struct nbl_serv_net_resource_mgt *net_resource_mgt = + container_of(t, struct nbl_serv_net_resource_mgt, serv_timer); + struct nbl_service_mgt *serv_mgt = net_resource_mgt->serv_mgt; + struct nbl_serv_flow_mgt *flow_mgt = NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt); + + mod_timer(&net_resource_mgt->serv_timer, + round_jiffies(net_resource_mgt->serv_timer_period + jiffies)); + if (flow_mgt->pending_async_work) { + nbl_common_queue_work(&net_resource_mgt->rx_mode_async, false); + flow_mgt->pending_async_work = 0; + } +} + static void nbl_serv_setup_flow_mgt(struct nbl_serv_flow_mgt *flow_mgt) { int i = 0; @@ -1009,6 +2719,212 @@ static void nbl_serv_setup_flow_mgt(struct nbl_serv_flow_mgt *flow_mgt) INIT_LIST_HEAD(&flow_mgt->submac_list[i]); } +static void +nbl_serv_register_restore_netdev_queue(struct nbl_service_mgt *serv_mgt) +{ + struct nbl_channel_ops *chan_ops = NBL_SERV_MGT_TO_CHAN_OPS(serv_mgt); + + if (!chan_ops->check_queue_exist(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + NBL_CHAN_TYPE_MAILBOX)) + return; + + chan_ops->register_msg(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + NBL_CHAN_MSG_STOP_ABNORMAL_SW_QUEUE, + nbl_serv_chan_stop_abnormal_sw_queue_resp, + serv_mgt); + + 
chan_ops->register_msg(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + NBL_CHAN_MSG_RESTORE_NETDEV_QUEUE, + nbl_serv_chan_restore_netdev_queue_resp, + serv_mgt); + + chan_ops->register_msg(NBL_SERV_MGT_TO_CHAN_PRIV(serv_mgt), + NBL_CHAN_MSG_RESTART_NETDEV_QUEUE, + nbl_serv_chan_restart_netdev_queue_resp, + serv_mgt); +} + +static void nbl_serv_set_wake(struct nbl_service_mgt *serv_mgt) +{ + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + u8 eth_id = NBL_COMMON_TO_ETH_ID(common); + + if (!common->is_vf && common->is_ocp) + disp_ops->set_wol(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), eth_id, + common->wol_ena); +} + +static void nbl_serv_remove_net_resource_mgt(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_mgt; + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct device *dev; + + net_mgt = NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + dev = NBL_COMMON_TO_DEV(common); + + if (net_mgt) { + nbl_serv_set_wake(serv_mgt); + timer_delete_sync(&net_mgt->serv_timer); + nbl_common_release_task(&net_mgt->rx_mode_async); + nbl_common_release_task(&net_mgt->tx_timeout); + if (common->is_vf) { + nbl_common_release_task(&net_mgt->update_link_state); + nbl_common_release_task(&net_mgt->update_vlan); + } + devm_kfree(dev, net_mgt); + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) = NULL; + } +} + +static int nbl_serv_hw_init(struct nbl_serv_net_resource_mgt *net_resource_mgt) +{ + struct nbl_service_mgt *serv_mgt = net_resource_mgt->serv_mgt; + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + u8 eth_id = NBL_COMMON_TO_ETH_ID(common); + struct nbl_dispatch_ops *disp_ops; + int ret = 0; + + disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + + /* disable wol when driver init */ + if (!common->is_vf && common->is_ocp) + ret = disp_ops->set_wol(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), + eth_id, false); + + 
return ret; +} + +static int nbl_serv_init_hw_stats(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt); + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct nbl_serv_ring_mgt *ring_mgt = NBL_SERV_MGT_TO_RING_MGT(serv_mgt); + struct nbl_serv_ring_vsi_info *vsi_info = + &ring_mgt->vsi_info[NBL_VSI_DATA]; + struct device *dev = NBL_COMMON_TO_DEV(common); + struct nbl_ustore_stats ustore_stats = {0}; + void *p = NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt); + u8 eth_id = NBL_COMMON_TO_ETH_ID(common); + int ret = 0; + + net_resource_mgt->hw_stats.total_uvn_stat_pkt_drop = + devm_kcalloc(dev, vsi_info->ring_num, sizeof(u64), GFP_KERNEL); + if (!net_resource_mgt->hw_stats.total_uvn_stat_pkt_drop) { + ret = -ENOMEM; + goto alloc_total_uvn_stat_pkt_drop_fail; + } + + if (!common->is_vf) { + ret = disp_ops->get_ustore_total_pkt_drop_stats(p, + eth_id, + &ustore_stats); + if (ret) + goto get_ustore_total_pkt_drop_stats_fail; + net_resource_mgt->hw_stats.start_ustore_stats.rx_drop_packets = + ustore_stats.rx_drop_packets; + net_resource_mgt->hw_stats.start_ustore_stats.rx_trun_packets = + ustore_stats.rx_trun_packets; + } + + return 0; + +get_ustore_total_pkt_drop_stats_fail: + devm_kfree(dev, net_resource_mgt->hw_stats.total_uvn_stat_pkt_drop); +alloc_total_uvn_stat_pkt_drop_fail: + return ret; +} + +static int nbl_serv_remove_hw_stats(void *priv) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_serv_net_resource_mgt *net_resource_mgt = + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct device *dev = NBL_COMMON_TO_DEV(common); + + devm_kfree(dev, net_resource_mgt->hw_stats.total_uvn_stat_pkt_drop); + return 0; +} + +static int 
nbl_serv_setup_net_resource_mgt(void *priv, + struct net_device *netdev, + u16 vlan_proto, u16 vlan_tci, + u32 rate) +{ + struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; + struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt); + struct device *dev = NBL_COMMON_TO_DEV(common); + struct nbl_serv_net_resource_mgt *net_resource_mgt; + unsigned long hw_stats_delay_time = 0; + int size = sizeof(struct nbl_serv_net_resource_mgt); + u32 delay_time; + + net_resource_mgt = devm_kzalloc(dev, size, GFP_KERNEL); + if (!net_resource_mgt) + return -ENOMEM; + + net_resource_mgt->netdev = netdev; + net_resource_mgt->serv_mgt = serv_mgt; + net_resource_mgt->vlan_proto = vlan_proto; + net_resource_mgt->vlan_tci = vlan_tci; + net_resource_mgt->max_tx_rate = rate; + NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) = net_resource_mgt; + + nbl_serv_hw_init(net_resource_mgt); + nbl_serv_register_restore_netdev_queue(serv_mgt); + + net_resource_mgt->hw_stats_period = NBL_HW_STATS_PERIOD_SECONDS * HZ; + get_random_bytes(&delay_time, sizeof(delay_time)); + hw_stats_delay_time = delay_time % net_resource_mgt->hw_stats_period; + timer_setup(&net_resource_mgt->serv_timer, + nbl_serv_net_task_service_timer, 0); + + net_resource_mgt->serv_timer_period = HZ; + nbl_common_alloc_task(&net_resource_mgt->rx_mode_async, + nbl_serv_rx_mode_async_task); + nbl_common_alloc_task(&net_resource_mgt->tx_timeout, + nbl_serv_handle_tx_timeout); + if (common->is_vf) { + nbl_common_alloc_task(&net_resource_mgt->update_link_state, + nbl_serv_update_link_state); + nbl_common_alloc_task(&net_resource_mgt->update_vlan, + nbl_serv_update_vlan); + } + + INIT_LIST_HEAD(&net_resource_mgt->tmp_add_filter_list); + INIT_LIST_HEAD(&net_resource_mgt->tmp_del_filter_list); + net_resource_mgt->get_stats_jiffies = jiffies; + + mod_timer(&net_resource_mgt->serv_timer, + jiffies + net_resource_mgt->serv_timer_period + + hw_stats_delay_time); + + return 0; +} + +static int nbl_serv_enable_adminq_irq(void *priv, 
u16 vector_id,
+				       bool enable_msix)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops;
+	int ret;
+
+	disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+	ret = disp_ops->enable_adminq_irq(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+					  vector_id, enable_msix);
+	if (ret)
+		return -EIO;
+
+	return 0;
+}
+
 static u8 __iomem *nbl_serv_get_hw_addr(void *priv, size_t *size)
 {
 	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -1287,6 +3203,58 @@ static int nbl_serv_process_abnormal_event(void *priv)
 	return 0;
 }
 
+static int nbl_serv_setup_vf_config(void *priv, int num_vfs, bool is_flush)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct net_device *netdev = NBL_SERV_MGT_TO_NETDEV(serv_mgt);
+	struct nbl_serv_net_resource_mgt *net_resource_mgt =
+		NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	u16 func_id = U16_MAX;
+	int i, ret = 0;
+
+	net_resource_mgt->num_vfs = num_vfs;
+	for (i = 0; i < net_resource_mgt->num_vfs; i++) {
+		func_id = nbl_serv_get_vf_function_id(serv_mgt, i);
+		if (func_id == U16_MAX) {
+			netif_err(common, drv, netdev, "vf id %d invalid\n", i);
+			return -EINVAL;
+		}
+
+		ret = disp_ops->init_vf_msix_map(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+						 func_id, !is_flush);
+		if (ret)
+			break;
+	}
+	return ret;
+}
+
+static void nbl_serv_remove_vf_config(void *priv)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_net_resource_mgt *net_resource_mgt =
+		NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+
+	nbl_serv_setup_vf_config(priv, net_resource_mgt->num_vfs, true);
+	net_resource_mgt->num_vfs = 0;
+}
+
+static int nbl_serv_setup_vf_resource(void *priv, int num_vfs)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_net_resource_mgt *net_resource_mgt =
NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt); + + net_resource_mgt->total_vfs = num_vfs; + return 0; +} + +static void nbl_serv_remove_vf_resource(void *priv) +{ + nbl_serv_remove_vf_config(priv); +} + static void nbl_serv_set_hw_status(void *priv, enum nbl_hw_status hw_status) { struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv; @@ -1325,6 +3293,15 @@ static struct nbl_service_ops serv_ops = { .init_chip = nbl_serv_init_chip, .destroy_chip = nbl_serv_destroy_chip, + .configure_msix_map = nbl_serv_configure_msix_map, + .destroy_msix_map = nbl_serv_destroy_msix_map, + .enable_mailbox_irq = nbl_serv_enable_mailbox_irq, + .enable_abnormal_irq = nbl_serv_enable_abnormal_irq, + .enable_adminq_irq = nbl_serv_enable_adminq_irq, + .request_net_irq = nbl_serv_request_net_irq, + .free_net_irq = nbl_serv_free_net_irq, + .get_global_vector = nbl_serv_get_global_vector, + .get_msix_entry_id = nbl_serv_get_msix_entry_id, .get_common_irq_num = nbl_serv_get_common_irq_num, .get_ctrl_irq_num = nbl_serv_get_ctrl_irq_num, .get_port_attributes = nbl_serv_get_port_attributes, @@ -1336,9 +3313,30 @@ static struct nbl_service_ops serv_ops = { .register_net = nbl_serv_register_net, .unregister_net = nbl_serv_unregister_net, - + .setup_txrx_queues = nbl_serv_setup_txrx_queues, + .remove_txrx_queues = nbl_serv_remove_txrx_queues, + + .init_tx_rate = nbl_serv_init_tx_rate, + .setup_q2vsi = nbl_serv_setup_q2vsi, + .remove_q2vsi = nbl_serv_remove_q2vsi, + .setup_rss = nbl_serv_setup_rss, + .remove_rss = nbl_serv_remove_rss, + .setup_rss_indir = nbl_serv_setup_rss_indir, .register_vsi_info = nbl_serv_register_vsi_info, + .alloc_rings = nbl_serv_alloc_rings, + .cpu_affinity_init = nbl_serv_cpu_affinity_init, + .free_rings = nbl_serv_free_rings, + .enable_napis = nbl_serv_enable_napis, + .disable_napis = nbl_serv_disable_napis, + .set_mask_en = nbl_serv_set_mask_en, + .start_net_flow = nbl_serv_start_net_flow, + .stop_net_flow = nbl_serv_stop_net_flow, + .clear_flow = 
nbl_serv_clear_flow, + .set_promisc_mode = nbl_serv_set_promisc_mode, + .cfg_multi_mcast = nbl_serv_cfg_multi_mcast, + .set_lldp_flow = nbl_serv_set_lldp_flow, + .remove_lldp_flow = nbl_serv_remove_lldp_flow, .start_mgt_flow = nbl_serv_start_mgt_flow, .stop_mgt_flow = nbl_serv_stop_mgt_flow, .get_tx_headroom = nbl_serv_get_tx_headroom, @@ -1349,15 +3347,28 @@ static struct nbl_service_ops serv_ops = { /* For netdev ops */ .netdev_open = nbl_serv_netdev_open, .netdev_stop = nbl_serv_netdev_stop, + .change_mtu = nbl_serv_change_mtu, + .set_mac = nbl_serv_set_mac, .rx_add_vid = nbl_serv_rx_add_vid, .rx_kill_vid = nbl_serv_rx_kill_vid, .get_stats64 = nbl_serv_get_stats64, + .set_rx_mode = nbl_serv_set_rx_mode, + .change_rx_flags = nbl_serv_change_rx_flags, + .set_features = nbl_serv_set_features, + .features_check = nbl_serv_features_check, + .get_phys_port_name = nbl_serv_get_phys_port_name, + .tx_timeout = nbl_serv_tx_timeout, + .select_queue = nbl_serv_select_queue, .get_rep_queue_info = nbl_serv_get_rep_queue_info, .set_netdev_ops = nbl_serv_set_netdev_ops, .get_vsi_id = nbl_serv_get_vsi_id, .get_eth_id = nbl_serv_get_eth_id, + .setup_net_resource_mgt = nbl_serv_setup_net_resource_mgt, + .remove_net_resource_mgt = nbl_serv_remove_net_resource_mgt, + .init_hw_stats = nbl_serv_init_hw_stats, + .remove_hw_stats = nbl_serv_remove_hw_stats, .get_hw_addr = nbl_serv_get_hw_addr, @@ -1374,6 +3385,13 @@ static struct nbl_service_ops serv_ops = { .check_fw_heartbeat = nbl_serv_check_fw_heartbeat, .check_fw_reset = nbl_serv_check_fw_reset, + .set_netdev_carrier_state = nbl_serv_set_netdev_carrier_state, + + .setup_vf_config = nbl_serv_setup_vf_config, + .remove_vf_config = nbl_serv_remove_vf_config, + + .setup_vf_resource = nbl_serv_setup_vf_resource, + .remove_vf_resource = nbl_serv_remove_vf_resource, .set_hw_status = nbl_serv_set_hw_status, .get_active_func_bitmaps = nbl_serv_get_active_func_bitmaps, diff --git 
a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h index 2d60be4610a4..29331407fc41 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h @@ -22,5 +22,9 @@ struct nbl_dev_ops_tbl { int nbl_dev_init(void *p, struct nbl_init_param *param); void nbl_dev_remove(void *p); +int nbl_dev_start(void *p, struct nbl_init_param *param); +void nbl_dev_stop(void *p); +int nbl_dev_setup_vf_config(void *p, int num_vfs); +void nbl_dev_remove_vf_config(void *p); #endif diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h index 6cab14b7cdfc..d7490a60bebb 100644 --- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h +++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h @@ -12,6 +12,17 @@ struct nbl_service_ops { int (*init_chip)(void *p); int (*destroy_chip)(void *p); + int (*configure_msix_map)(void *p, u16 num_net_msix, + u16 num_others_msix, bool net_msix_mask_en); + int (*destroy_msix_map)(void *priv); + int (*enable_mailbox_irq)(void *p, u16 vector_id, bool enable_msix); + int (*enable_abnormal_irq)(void *p, u16 vector_id, bool enable_msix); + int (*enable_adminq_irq)(void *p, u16 vector_id, bool enable_msix); + int (*request_net_irq)(void *priv, + struct nbl_msix_info_param *msix_info); + void (*free_net_irq)(void *priv, struct nbl_msix_info_param *msix_info); + u16 (*get_global_vector)(void *priv, u16 local_vec_id); + u16 (*get_msix_entry_id)(void *priv, u16 local_vec_id); void (*get_common_irq_num)(void *priv, struct nbl_common_irq_num *irq_num); void (*get_ctrl_irq_num)(void *priv, struct nbl_ctrl_irq_num *irq_num); @@ -20,15 +31,21 @@ struct nbl_service_ops { int (*get_part_number)(void *priv, char *part_number); int (*get_serial_number)(void *priv, char 
 						*serial_number);
 	int (*enable_port)(void *p, bool enable);
+	void (*set_netdev_carrier_state)(void *p, struct net_device *netdev,
+					 u8 link_state);
+
 	int (*vsi_open)(void *priv, struct net_device *netdev, u16 vsi_index,
 			u16 real_qps, bool use_napi);
 	int (*vsi_stop)(void *priv, u16 vsi_index);
+	int (*netdev_open)(struct net_device *netdev);
 	int (*netdev_stop)(struct net_device *netdev);
+	int (*change_mtu)(struct net_device *netdev, int new_mtu);
 	void (*get_stats64)(struct net_device *netdev,
 			    struct rtnl_link_stats64 *stats);
 	void (*set_rx_mode)(struct net_device *dev);
 	void (*change_rx_flags)(struct net_device *dev, int flag);
+	int (*set_mac)(struct net_device *dev, void *p);
 	int (*rx_add_vid)(struct net_device *dev, __be16 proto, u16 vid);
 	int (*rx_kill_vid)(struct net_device *dev, __be16 proto, u16 vid);
 	int (*set_features)(struct net_device *dev, netdev_features_t features);
@@ -44,13 +61,44 @@ struct nbl_service_ops {
 			    struct nbl_register_net_param *register_param,
 			    struct nbl_register_net_result *register_result);
 	int (*unregister_net)(void *priv);
+	int (*setup_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num,
+				 u16 net_vector_id);
+	void (*remove_txrx_queues)(void *priv, u16 vsi_id);
 	int (*register_vsi_info)(void *priv, struct nbl_vsi_param *vsi_param);
+	int (*init_tx_rate)(void *priv, u16 vsi_id);
+	int (*setup_q2vsi)(void *priv, u16 vsi_id);
+	void (*remove_q2vsi)(void *priv, u16 vsi_id);
+	int (*setup_rss)(void *priv, u16 vsi_id);
+	void (*remove_rss)(void *priv, u16 vsi_id);
+	int (*setup_rss_indir)(void *priv, u16 vsi_id);
+
+	int (*alloc_rings)(void *priv, struct net_device *dev,
+			   struct nbl_ring_param *param);
+	void (*cpu_affinity_init)(void *priv, u16 rings_num);
+	void (*free_rings)(void *priv);
+	int (*enable_napis)(void *priv, u16 vsi_index);
+	void (*disable_napis)(void *priv, u16 vsi_index);
+	void (*set_mask_en)(void *priv, bool enable);
+	int (*start_net_flow)(void *priv, struct net_device *dev, u16 vsi_id,
+			      u16 vid, bool trusted);
+	void (*stop_net_flow)(void *priv, u16 vsi_id);
+	void (*clear_flow)(void *priv, u16 vsi_id);
+	int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+	int (*cfg_multi_mcast)(void *priv, u16 vsi, u16 enable);
+	int (*set_lldp_flow)(void *priv, u16 vsi_id);
+	void (*remove_lldp_flow)(void *priv, u16 vsi_id);
 	int (*start_mgt_flow)(void *priv);
 	void (*stop_mgt_flow)(void *priv);
 	u32 (*get_tx_headroom)(void *priv);
+	u16 (*get_vsi_id)(void *priv, u16 func_id, u16 type);
 	void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id,
 			   u8 *logic_eth_id);
+	int (*setup_net_resource_mgt)(void *priv, struct net_device *dev,
+				      u16 vlan_proto, u16 vlan_tci, u32 rate);
+	void (*remove_net_resource_mgt)(void *priv);
+	int (*init_hw_stats)(void *priv);
+	int (*remove_hw_stats)(void *priv);
 	void (*set_sfp_state)(void *priv, struct net_device *netdev, u8 eth_id,
 			      bool open, bool is_force);
 	int (*get_board_id)(void *priv);
@@ -76,7 +124,15 @@ struct nbl_service_ops {
 	bool (*check_fw_reset)(void *priv);
 	bool (*get_product_fix_cap)(void *priv,
 				    enum nbl_fix_cap_type cap_type);
+
+	int (*setup_vf_config)(void *priv, int num_vfs, bool is_flush);
+	void (*remove_vf_config)(void *priv);
 	void (*register_dev_name)(void *priv, u16 vsi_id, char *name);
+	void (*get_dev_name)(void *priv, u16 vsi_id, char *name);
+
+	int (*setup_vf_resource)(void *priv, int num_vfs);
+	void (*remove_vf_resource)(void *priv);
+
 	void (*set_hw_status)(void *priv, enum nbl_hw_status hw_status);
 	void (*get_active_func_bitmaps)(void *priv, unsigned long *bitmap,
 					int max_func);
diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
index 6aca084d2b36..70e62fa0dd97 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -18,6 +18,19 @@ static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = {
 	},
 };
 
+int nbl_core_start(struct nbl_adapter *adapter, struct nbl_init_param *param)
+{
+	int ret = 0;
+
+	ret = nbl_dev_start(adapter, param);
+	return ret;
+}
+
+void nbl_core_stop(struct nbl_adapter *adapter)
+{
+	nbl_dev_stop(adapter);
+}
+
 static void nbl_core_setup_product_ops(struct nbl_adapter *adapter,
 				       struct nbl_init_param *param,
@@ -184,8 +197,13 @@ static int nbl_probe(struct pci_dev *pdev,
 	}
 	pci_set_drvdata(pdev, adapter);
 
+	err = nbl_core_start(adapter, &param);
+	if (err)
+		goto core_start_err;
+
 	dev_dbg(dev, "nbl probe ok!\n");
 	return 0;
 
+core_start_err:
+	nbl_core_remove(adapter);
adapter_init_err:
 	pci_clear_master(pdev);
configure_dma_err:
@@ -201,6 +219,8 @@ static void nbl_remove(struct pci_dev *pdev)
 	if (!adapter)
 		return;
 	pci_disable_sriov(pdev);
+
+	nbl_core_stop(adapter);
 	nbl_core_remove(adapter);
 	pci_clear_master(pdev);
@@ -209,6 +229,34 @@ static void nbl_remove(struct pci_dev *pdev)
 	dev_dbg(&pdev->dev, "nbl remove OK!\n");
 }
 
+static __maybe_unused int nbl_sriov_configure(struct pci_dev *pdev, int num_vfs)
+{
+	struct nbl_adapter *adapter = pci_get_drvdata(pdev);
+	int err;
+
+	if (!num_vfs) {
+		pci_disable_sriov(pdev);
+		if (!adapter)
+			return 0;
+
+		nbl_dev_remove_vf_config(adapter);
+		return 0;
+	}
+
+	err = nbl_dev_setup_vf_config(adapter, num_vfs);
+	if (err) {
+		dev_err(&pdev->dev, "nbl setup vf config failed %d!\n", err);
+		return err;
+	}
+	err = pci_enable_sriov(pdev, num_vfs);
+	if (err) {
+		nbl_dev_remove_vf_config(adapter);
+		dev_err(&pdev->dev, "nbl enable sriov failed %d!\n", err);
+		return err;
+	}
+	return num_vfs;
+}
+
 #define NBL_VENDOR_ID (0x1F0F)
 
 /*
@@ -297,6 +345,7 @@ static struct pci_driver nbl_driver = {
 	.id_table = nbl_id_table,
 	.probe = nbl_probe,
 	.remove = nbl_remove,
+	.sriov_configure = nbl_sriov_configure,
 };
 
 static int __init nbl_module_init(void)
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 19+ messages in thread
* [PATCH v2 net-next 15/15] net/nebula-matrix: add st_sysfs and vf name sysfs
  2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
                   ` (13 preceding siblings ...)
  2026-01-09 10:01 ` [PATCH v2 net-next 14/15] net/nebula-matrix: add Dev start, stop operation illusion.wang
@ 2026-01-09 10:01 ` illusion.wang
  2026-01-09 18:40   ` Andrew Lunn
  2026-01-10  0:20 ` [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs Jakub Kicinski
  15 siblings, 1 reply; 19+ messages in thread
From: illusion.wang @ 2026-01-09 10:01 UTC (permalink / raw)
To: dimon.zhao, illusion.wang, alvin.wang, sam.chen, netdev
Cc: andrew+netdev, corbet, kuba, linux-doc, lorenzo, pabeni, horms,
	vadim.fedorenko, lukas.bulwahn, edumazet, open list

Add st_sysfs to support our private nblconfig tool. The VF netdev sysfs
attribute addresses cases where PF names are too long, and removes the
dependency on specific udev versions.

Signed-off-by: illusion.wang <illusion.wang@nebula-matrix.com>
---
 .../net/ethernet/nebula-matrix/nbl/Makefile   |   1 +
 .../net/ethernet/nebula-matrix/nbl/nbl_core.h |  11 +
 .../nebula-matrix/nbl/nbl_core/nbl_dev.c      | 192 +++++++++++-
 .../nebula-matrix/nbl/nbl_core/nbl_dev.h      |  20 ++
 .../nebula-matrix/nbl/nbl_core/nbl_service.c  | 296 +++++++++++++++++-
 .../nebula-matrix/nbl/nbl_core/nbl_service.h  |  24 ++
 .../nebula-matrix/nbl/nbl_core/nbl_sysfs.c    |  85 +++++
 .../nebula-matrix/nbl/nbl_core/nbl_sysfs.h    |  20 ++
 .../nbl/nbl_include/nbl_def_dev.h             |   2 +
 .../nbl/nbl_include/nbl_def_service.h         |   4 +
 .../nbl/nbl_include/nbl_include.h             |  19 ++
 .../net/ethernet/nebula-matrix/nbl/nbl_main.c |  49 +++
 12 files changed, 721 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.c
 create mode 100644 drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.h

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/Makefile b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
index 062ff1ffb964..bd7f91c789b5 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/Makefile
+++ b/drivers/net/ethernet/nebula-matrix/nbl/Makefile
@@ -19,6 +19,7 @@ nbl_core-objs += nbl_common/nbl_common.o \
		 nbl_hw/nbl_adminq.o \
		 nbl_core/nbl_dispatch.o \
		 nbl_core/nbl_service.o \
+		 nbl_core/nbl_sysfs.o \
		 nbl_core/nbl_dev.o \
		 nbl_main.o

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
index 3db1364eefdc..1988c087e22b 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core.h
@@ -124,10 +124,21 @@ struct nbl_netdev_priv {
 	s64 last_st_time;
 };
 
+#define NBL_ST_MAX_DEVICE_NUM 96
+struct nbl_software_tool_table {
+	DECLARE_BITMAP(devid, NBL_ST_MAX_DEVICE_NUM);
+	int major;
+	dev_t devno;
+	struct class *cls;
+};
+
 struct nbl_adapter *nbl_core_init(struct pci_dev *pdev,
 				  struct nbl_init_param *param);
 void nbl_core_remove(struct nbl_adapter *adapter);
 int nbl_core_start(struct nbl_adapter *adapter, struct nbl_init_param *param);
 void nbl_core_stop(struct nbl_adapter *adapter);
+int nbl_st_init(struct nbl_software_tool_table *st_table);
+void nbl_st_remove(struct nbl_software_tool_table *st_table);
+struct nbl_software_tool_table *nbl_get_st_table(void);
 
 #endif

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
index a379a5851523..b94502d31305 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.c
@@ -17,6 +17,9 @@ static struct nbl_dev_ops dev_ops;
 static int nbl_dev_clean_mailbox_schedule(struct nbl_dev_mgt *dev_mgt);
 static void nbl_dev_clean_adminq_schedule(struct nbl_task_info *task_info);
 static void nbl_dev_handle_fatal_err(struct nbl_dev_mgt *dev_mgt);
+static int nbl_dev_setup_st_dev(struct nbl_adapter *adapter,
+				struct nbl_init_param *param);
+static void nbl_dev_remove_st_dev(struct nbl_adapter *adapter);
 
 /* ---------- Basic functions ---------- */
 static int nbl_dev_get_port_attributes(struct nbl_dev_mgt *dev_mgt)
@@ -2237,6 +2240,66 @@ struct nbl_dev_vsi *nbl_dev_vsi_select(struct nbl_dev_mgt *dev_mgt,
 	return NULL;
 }
 
+static int nbl_dev_chan_get_st_name_req(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+	struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+	struct nbl_dev_st_dev *st_dev = NBL_DEV_MGT_TO_ST_DEV(dev_mgt);
+	struct nbl_chan_send_info chan_send = { 0 };
+
+	NBL_CHAN_SEND(chan_send, NBL_COMMON_TO_MGT_PF(common),
+		      NBL_CHAN_MSG_GET_ST_NAME, NULL, 0, st_dev->real_st_name,
+		      sizeof(st_dev->real_st_name), 1);
+	return chan_ops->send_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+				  &chan_send);
+}
+
+static void nbl_dev_chan_get_st_name_resp(void *priv, u16 src_id, u16 msg_id,
+					  void *data, u32 data_len)
+{
+	struct nbl_dev_mgt *dev_mgt = (struct nbl_dev_mgt *)priv;
+	struct nbl_dev_st_dev *st_dev = NBL_DEV_MGT_TO_ST_DEV(dev_mgt);
+	struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+	struct device *dev = NBL_COMMON_TO_DEV(dev_mgt->common);
+	struct nbl_chan_ack_info chan_ack;
+	int ret;
+
+	NBL_CHAN_ACK(chan_ack, src_id, NBL_CHAN_MSG_GET_ST_NAME, msg_id, 0,
+		     st_dev->st_name, sizeof(st_dev->st_name));
+	ret = chan_ops->send_ack(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt), &chan_ack);
+	if (ret)
+		dev_err(dev,
+			"channel send ack failed with ret: %d, msg_type: %d\n",
+			ret, NBL_CHAN_MSG_GET_ST_NAME);
+}
+
+static void nbl_dev_register_get_st_name_chan_msg(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+	struct nbl_dev_st_dev *st_dev = NBL_DEV_MGT_TO_ST_DEV(dev_mgt);
+
+	if (!chan_ops->check_queue_exist(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+					 NBL_CHAN_TYPE_MAILBOX))
+		return;
+
+	chan_ops->register_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+			       NBL_CHAN_MSG_GET_ST_NAME,
+			       nbl_dev_chan_get_st_name_resp, dev_mgt);
+	st_dev->resp_msg_registered = true;
+}
+
+static void nbl_dev_unregister_get_st_name_chan_msg(struct nbl_dev_mgt *dev_mgt)
+{
+	struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+	struct nbl_dev_st_dev *st_dev = NBL_DEV_MGT_TO_ST_DEV(dev_mgt);
+
+	if (!st_dev->resp_msg_registered)
+		return;
+
+	chan_ops->unregister_msg(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt),
+				 NBL_CHAN_MSG_GET_ST_NAME);
+}
+
 static struct nbl_dev_net_ops netdev_ops[NBL_PRODUCT_MAX] = {
 	{
 		.setup_netdev_ops = nbl_dev_setup_netops_leonis,
@@ -2360,6 +2423,70 @@ static void nbl_dev_remove_net_dev(struct nbl_adapter *adapter)
 	*net_dev = NULL;
 }
 
+static int nbl_dev_setup_st_dev(struct nbl_adapter *adapter,
+				struct nbl_init_param *param)
+{
+	struct nbl_dev_mgt *dev_mgt =
+		(struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+	struct device *dev = NBL_ADAP_TO_DEV(adapter);
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+	void *priv = NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt);
+	struct nbl_dev_st_dev *st_dev;
+	int ret;
+
+	/* Unify restool's chardev for all chips; all PFs create a chardev. */
+	if (!serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+					   NBL_RESTOOL_CAP))
+		return 0;
+
+	st_dev = devm_kzalloc(dev, sizeof(struct nbl_dev_st_dev), GFP_KERNEL);
+	if (!st_dev)
+		return -ENOMEM;
+
+	dev_mgt->st_dev = st_dev;
+	ret = serv_ops->setup_st(priv, nbl_get_st_table(), st_dev->st_name);
+	if (ret) {
+		dev_err(dev, "create resource char dev failed\n");
+		goto alloc_chardev_failed;
+	}
+
+	if (param->caps.has_ctrl) {
+		nbl_dev_register_get_st_name_chan_msg(dev_mgt);
+	} else {
+		ret = nbl_dev_chan_get_st_name_req(dev_mgt);
+		if (!ret)
+			serv_ops->register_real_st_name(priv,
+							st_dev->real_st_name);
+		else
+			dev_err(dev, "get real resource char dev failed\n");
+	}
+
+	return 0;
+
+alloc_chardev_failed:
+	devm_kfree(NBL_ADAP_TO_DEV(adapter), st_dev);
+	dev_mgt->st_dev = NULL;
+	return ret;
+}
+
+static void nbl_dev_remove_st_dev(struct nbl_adapter *adapter)
+{
+	struct nbl_dev_mgt *dev_mgt =
+		(struct nbl_dev_mgt *)NBL_ADAP_TO_DEV_MGT(adapter);
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+	struct nbl_dev_st_dev *st_dev = NBL_DEV_MGT_TO_ST_DEV(dev_mgt);
+
+	if (!serv_ops->get_product_fix_cap(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+					   NBL_RESTOOL_CAP))
+		return;
+
+	nbl_dev_unregister_get_st_name_chan_msg(dev_mgt);
+	serv_ops->remove_st(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+			    nbl_get_st_table());
+
+	devm_kfree(NBL_ADAP_TO_DEV(adapter), st_dev);
+	dev_mgt->st_dev = NULL;
+}
+
 static int nbl_dev_setup_dev_mgt(struct nbl_common_info *common,
 				 struct nbl_dev_mgt **dev_mgt)
 {
@@ -2437,6 +2564,10 @@ int nbl_dev_init(void *p, struct nbl_init_param *param)
 	if (ret)
 		goto setup_net_dev_fail;
 
+	ret = nbl_dev_setup_st_dev(adapter, param);
+	if (ret)
+		goto setup_st_dev_fail;
+
 	ret = nbl_dev_setup_ops(dev, dev_ops_tbl, adapter);
 	if (ret)
 		goto setup_ops_fail;
@@ -2444,6 +2575,8 @@ int nbl_dev_init(void *p, struct nbl_init_param *param)
 	return 0;
 
setup_ops_fail:
+	nbl_dev_remove_st_dev(adapter);
+setup_st_dev_fail:
 	nbl_dev_remove_net_dev(adapter);
setup_net_dev_fail:
 	nbl_dev_remove_ctrl_dev(adapter);
@@ -2466,6 +2599,8 @@ void nbl_dev_remove(void *p)
 		&NBL_ADAP_TO_DEV_OPS_TBL(adapter);
 
 	nbl_dev_remove_ops(dev, dev_ops_tbl);
+
+	nbl_dev_remove_st_dev(adapter);
 	nbl_dev_remove_net_dev(adapter);
 	nbl_dev_remove_ctrl_dev(adapter);
 	nbl_dev_remove_common_dev(adapter);
@@ -2721,6 +2856,30 @@ int nbl_dev_setup_vf_config(void *p, int num_vfs)
 				       num_vfs, false);
 }
 
+void nbl_dev_register_dev_name(void *p)
+{
+	struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+	struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+	struct nbl_dev_net *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+	struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter);
+
+	/* get pf_name then register it to AF */
+	serv_ops->register_dev_name(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+				    common->vsi_id, net_dev->netdev->name);
+}
+
+void nbl_dev_get_dev_name(void *p, char *dev_name)
+{
+	struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+	struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+	struct nbl_service_ops *serv_ops = NBL_DEV_MGT_TO_SERV_OPS(dev_mgt);
+	struct nbl_common_info *common = NBL_ADAP_TO_COMMON(adapter);
+
+	serv_ops->get_dev_name(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt),
+			       common->vsi_id, dev_name);
+}
+
 void nbl_dev_remove_vf_config(void *p)
 {
 	struct nbl_adapter *adapter = (struct nbl_adapter *)p;
@@ -2748,6 +2907,7 @@ static int nbl_dev_start_net_dev(struct nbl_adapter *adapter,
 	struct nbl_ring_param ring_param = {0};
 	struct nbl_dev_vsi *vsi;
 	u16 net_vector_id, queue_num;
+	char dev_name[IFNAMSIZ] = {0};
 	int ret;
 
 	vsi = nbl_dev_vsi_select(dev_mgt, NBL_VSI_DATA);
@@ -2848,12 +3008,34 @@ static int nbl_dev_start_net_dev(struct nbl_adapter *adapter,
 			if (ret)
 				goto setup_vf_res_fail;
 		}
+		nbl_netdev_add_st_sysfs(netdev, net_dev);
+
+	} else {
+		/* vf device needs the pf name as its base name */
+		nbl_net_add_name_attr(&net_dev->dev_attr.dev_name_attr,
+				      dev_name);
+#ifdef CONFIG_PCI_ATS
+		nbl_dev_get_dev_name(adapter, dev_name);
+		memcpy(net_dev->dev_attr.dev_name_attr.net_dev_name, dev_name,
+		       IFNAMSIZ);
+		ret = sysfs_create_file(&netdev->dev.kobj,
+					&net_dev->dev_attr.dev_name_attr.attr);
+		if (ret) {
+			dev_err(dev, "nbl vf device add dev_name:%s net-fs failed",
+				dev_name);
+			goto add_vf_sys_attr_fail;
+		}
+		dev_dbg(dev, "nbl vf device get dev_name:%s", dev_name);
+#endif
 	}
 
 	set_bit(NBL_DOWN, adapter->state);
 	return 0;
 
setup_vf_res_fail:
+#ifdef CONFIG_PCI_ATS
+add_vf_sys_attr_fail:
+#endif
 	unregister_netdev(netdev);
register_netdev_fail:
 	nbl_dev_free_net_irq(dev_mgt);
@@ -2884,6 +3066,7 @@ static void nbl_dev_stop_net_dev(struct nbl_adapter *adapter)
 	struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
 	struct nbl_dev_vsi *vsi;
 	struct net_device *netdev;
+	char dev_name[IFNAMSIZ] = { 0 };
 
 	if (!net_dev)
 		return;
@@ -2894,8 +3077,15 @@ static void nbl_dev_stop_net_dev(struct nbl_adapter *adapter)
 	if (!vsi)
 		return;
 
-	if (!common->is_vf)
+	if (!common->is_vf) {
 		serv_ops->remove_vf_resource(NBL_DEV_MGT_TO_SERV_PRIV(dev_mgt));
+		nbl_netdev_remove_st_sysfs(net_dev);
+	} else {
+		/* remove vf dev_name attr */
+		if (memcmp(net_dev->dev_attr.dev_name_attr.net_dev_name,
+			   dev_name, IFNAMSIZ))
+			nbl_net_remove_dev_attr(net_dev);
+	}
 
 	serv_ops->change_mtu(netdev, 0);
 	unregister_netdev(netdev);

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
index 3b1cf6eea915..91c672ee5993 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_dev.h
@@ -8,6 +8,7 @@
 #define _NBL_DEV_H_
 
 #include "nbl_core.h"
+#include "nbl_sysfs.h"
 
 #define NBL_DEV_MGT_TO_COMMON(dev_mgt) ((dev_mgt)->common)
 #define NBL_DEV_MGT_TO_DEV(dev_mgt) \
@@ -15,6 +16,7 @@
 #define NBL_DEV_MGT_TO_COMMON_DEV(dev_mgt) ((dev_mgt)->common_dev)
 #define NBL_DEV_MGT_TO_CTRL_DEV(dev_mgt) ((dev_mgt)->ctrl_dev)
 #define NBL_DEV_MGT_TO_NET_DEV(dev_mgt) ((dev_mgt)->net_dev)
+#define NBL_DEV_MGT_TO_ST_DEV(dev_mgt) ((dev_mgt)->st_dev)
 #define NBL_DEV_COMMON_TO_MSIX_INFO(dev_common) (&(dev_common)->msix_info)
 #define NBL_DEV_CTRL_TO_TASK_INFO(dev_ctrl) (&(dev_ctrl)->task_info)
 #define NBL_DEV_MGT_TO_NETDEV_OPS(dev_mgt) ((dev_mgt)->net_dev->ops)
@@ -177,6 +179,17 @@ struct nbl_dev_net {
 	u16 total_queue_num;
 	u16 kernel_queue_num;
 	u16 total_vfs;
+	struct nbl_st_name st_name;
+};
+
+/* Unify res tool. Every PF has an st char dev. For leonis, only pf0 has an
+ * adminq, so the other PFs' resource tool actually uses pf0's char dev.
+ */
+struct nbl_dev_st_dev {
+	bool resp_msg_registered;
+	u8 resv[3];
+	char st_name[NBL_RESTOOL_NAME_LEN];
+	char real_st_name[NBL_RESTOOL_NAME_LEN];
 };
 
 struct nbl_dev_mgt {
@@ -186,6 +199,7 @@ struct nbl_dev_mgt {
 	struct nbl_dev_common *common_dev;
 	struct nbl_dev_ctrl *ctrl_dev;
 	struct nbl_dev_net *net_dev;
+	struct nbl_dev_st_dev *st_dev;
 };
 
 struct nbl_dev_vsi_feature {
@@ -247,4 +261,10 @@ struct nbl_dev_board_id_table {
 struct nbl_dev_vsi *nbl_dev_vsi_select(struct nbl_dev_mgt *dev_mgt,
 				       u8 vsi_index);
+void nbl_net_add_name_attr(struct nbl_netdev_name_attr *dev_name_attr,
+			   char *rep_name);
+void nbl_net_remove_dev_attr(struct nbl_dev_net *net_dev);
+int nbl_netdev_add_st_sysfs(struct net_device *netdev,
+			    struct nbl_dev_net *net_dev);
+void nbl_netdev_remove_st_sysfs(struct nbl_dev_net *net_dev);
 #endif

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
index 5118615c0dbe..9418777e5b18 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.c
@@ -3159,6 +3159,278 @@ static int nbl_serv_register_vsi_info(void *priv,
 					 vsi_param->queue_num);
 }
 
+static int nbl_serv_st_open(struct inode *inode, struct file *filep)
+{
+	struct nbl_serv_st_mgt *p =
+		container_of(inode->i_cdev, struct nbl_serv_st_mgt, cdev);
+
+	filep->private_data = p;
+
+	return 0;
+}
+
+static ssize_t nbl_serv_st_write(struct file *file, const char __user *ubuf,
+				 size_t size, loff_t *ppos)
+{
+	return 0;
+}
+
+static ssize_t nbl_serv_st_read(struct file *file, char __user *ubuf,
+				size_t size, loff_t *ppos)
+{
+	return 0;
+}
+
+static int nbl_serv_st_release(struct inode *inode, struct file *filp)
+{
+	return 0;
+}
+
+static int nbl_serv_process_passthrough(struct nbl_service_mgt *serv_mgt,
+					unsigned int cmd, unsigned long arg)
+{
+	struct nbl_serv_st_mgt *st_mgt = NBL_SERV_MGT_TO_ST_MGT(serv_mgt);
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct net_device *netdev = NBL_SERV_MGT_TO_NETDEV(serv_mgt);
+	struct nbl_passthrough_fw_cmd *param = NULL, *result = NULL;
+	int ret = 0;
+
+	if (st_mgt->real_st_name_valid)
+		return -EOPNOTSUPP;
+
+	param = kzalloc(sizeof(*param), GFP_KERNEL);
+	if (!param)
+		return -ENOMEM;
+
+	result = kzalloc(sizeof(*result), GFP_KERNEL);
+	if (!result) {
+		ret = -ENOMEM;
+		goto alloc_result_fail;
+	}
+
+	if (copy_from_user(param, (void __user *)arg, _IOC_SIZE(cmd))) {
+		netif_err(common, drv, netdev, "Bad access.\n");
+		ret = -EFAULT;
+		goto passthrough_fail;
+	}
+
+	nbl_debug(common, "Passthrough opcode: %d\n", param->opcode);
+
+	ret = disp_ops->passthrough_fw_cmd(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt),
+					   param, result);
+	if (ret)
+		goto passthrough_fail;
+
+	ret = copy_to_user((void __user *)arg, result, _IOC_SIZE(cmd));
+
+passthrough_fail:
+	kfree(result);
+alloc_result_fail:
+	kfree(param);
+	return ret;
+}
+
+static int nbl_serv_process_st_info(struct nbl_service_mgt *serv_mgt,
+				    unsigned int cmd, unsigned long arg)
+{
+	struct nbl_serv_net_resource_mgt *net_resource_mgt =
+		NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt);
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct nbl_serv_st_mgt *st_mgt = NBL_SERV_MGT_TO_ST_MGT(serv_mgt);
+	struct nbl_st_info_param *param = NULL;
+	int ret = 0;
+
+	param = kzalloc(sizeof(*param), GFP_KERNEL);
+	if (!param)
+		return -ENOMEM;
+
+	strscpy(param->driver_name, NBL_DRIVER_NAME,
+		sizeof(param->driver_name));
+	if (net_resource_mgt->netdev)
+		strscpy(param->netdev_name[0], net_resource_mgt->netdev->name,
+			sizeof(param->netdev_name[0]));
+
+	param->bus = common->bus;
+	param->devid = common->devid;
+	param->function = common->function;
+	param->domain = pci_domain_nr(NBL_COMMON_TO_PDEV(common)->bus);
+
+	param->version = IOCTL_ST_INFO_VERSION;
+
+	param->real_chrdev_flag = st_mgt->real_st_name_valid;
+	if (st_mgt->real_st_name_valid)
+		memcpy(param->real_chrdev_name, st_mgt->real_st_name,
+		       sizeof(param->real_chrdev_name));
+
+	ret = copy_to_user((void __user *)arg, param, _IOC_SIZE(cmd));
+
+	kfree(param);
+	return ret;
+}
+
+static long nbl_serv_st_unlock_ioctl(struct file *file, unsigned int cmd,
+				     unsigned long arg)
+{
+	struct nbl_serv_st_mgt *st_mgt = file->private_data;
+	struct nbl_service_mgt *serv_mgt =
+		(struct nbl_service_mgt *)st_mgt->serv_mgt;
+	struct net_device *netdev = NBL_SERV_MGT_TO_NETDEV(serv_mgt);
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	int ret = 0;
+
+	if (_IOC_TYPE(cmd) != IOCTL_TYPE) {
+		netif_err(common, drv, netdev, "cmd %u, magic 0x%x/0x%x.\n",
+			  cmd, _IOC_TYPE(cmd), IOCTL_TYPE);
+		return -ENOTTY;
+	}
+
+	if ((_IOC_DIR(cmd) & (_IOC_READ | _IOC_WRITE)) &&
+	    !access_ok((void __user *)arg, _IOC_SIZE(cmd))) {
+		netif_err(common, drv, netdev, "Bad access.\n");
+		return -EFAULT;
+	}
+
+	switch (cmd) {
+	case IOCTL_PASSTHROUGH:
+		ret = nbl_serv_process_passthrough(serv_mgt, cmd, arg);
+		break;
+	case IOCTL_ST_INFO:
+		ret = nbl_serv_process_st_info(serv_mgt, cmd, arg);
+		break;
+	default:
+		netif_err(common, drv, netdev, "Unknown cmd %d.\n", cmd);
+		return -ENOTTY;
+	}
+
+	return ret;
+}
+
+static const struct file_operations st_ops = {
+	.owner = THIS_MODULE,
+	.open = nbl_serv_st_open,
+	.write = nbl_serv_st_write,
+	.read = nbl_serv_st_read,
+	.unlocked_ioctl = nbl_serv_st_unlock_ioctl,
+	.release = nbl_serv_st_release,
+};
+
+static int nbl_serv_alloc_subdev_id(struct nbl_software_tool_table *st_table)
+{
+	int subdev_id;
+
+	subdev_id = find_first_zero_bit(st_table->devid, NBL_ST_MAX_DEVICE_NUM);
+	if (subdev_id == NBL_ST_MAX_DEVICE_NUM)
+		return -ENOSPC;
+	set_bit(subdev_id, st_table->devid);
+
+	return subdev_id;
+}
+
+static void nbl_serv_free_subdev_id(struct nbl_software_tool_table *st_table,
+				    int id)
+{
+	clear_bit(id, st_table->devid);
+}
+
+static void nbl_serv_register_real_st_name(void *priv, char *st_name)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_st_mgt *st_mgt = NBL_SERV_MGT_TO_ST_MGT(serv_mgt);
+
+	st_mgt->real_st_name_valid = true;
+	memcpy(st_mgt->real_st_name, st_name, NBL_RESTOOL_NAME_LEN);
+}
+
+static int nbl_serv_setup_st(void *priv, void *st_table_param, char *st_name)
+{
+	struct nbl_software_tool_table *st_table =
+		(struct nbl_software_tool_table *)st_table_param;
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+	struct nbl_serv_st_mgt *st_mgt = NBL_SERV_MGT_TO_ST_MGT(serv_mgt);
+	struct device *char_device;
+	char name[NBL_RESTOOL_NAME_LEN] = {0};
+	dev_t devid;
+	int id, subdev_id, ret = 0;
+
+	id = NBL_COMMON_TO_BOARD_ID(common);
+
+	subdev_id = nbl_serv_alloc_subdev_id(st_table);
+	if (subdev_id < 0)
+		return subdev_id;
+
+	devid = MKDEV(st_table->major, subdev_id);
+
+	if (!NBL_COMMON_TO_PCI_FUNC_ID(common))
+		snprintf(name, sizeof(name), "nblst%04x_conf%d",
+			 NBL_COMMON_TO_PDEV(common)->device, id);
+	else
+		snprintf(name, sizeof(name), "nblst%04x_conf%d.%d",
+			 NBL_COMMON_TO_PDEV(common)->device, id,
+			 NBL_COMMON_TO_PCI_FUNC_ID(common));
+
+	st_mgt = devm_kzalloc(NBL_COMMON_TO_DEV(common), sizeof(*st_mgt),
+			      GFP_KERNEL);
+	if (!st_mgt) {
+		ret = -ENOMEM;
+		goto malloc_fail;
+	}
+
+	st_mgt->serv_mgt = serv_mgt;
+
+	st_mgt->major = MAJOR(devid);
+	st_mgt->minor = MINOR(devid);
+	st_mgt->devno = devid;
+	st_mgt->subdev_id = subdev_id;
+
+	cdev_init(&st_mgt->cdev, &st_ops);
+	ret = cdev_add(&st_mgt->cdev, devid, 1);
+	if (ret)
+		goto cdev_add_fail;
+
+	char_device = device_create(st_table->cls, NULL, st_mgt->devno, NULL,
+				    name);
+	if (IS_ERR(char_device)) {
+		ret = PTR_ERR(char_device);
+		goto device_create_fail;
+	}
+
+	memcpy(st_name, name, NBL_RESTOOL_NAME_LEN);
+	memcpy(st_mgt->st_name, name, NBL_RESTOOL_NAME_LEN);
+	NBL_SERV_MGT_TO_ST_MGT(serv_mgt) = st_mgt;
+	return 0;
+
+device_create_fail:
+	cdev_del(&st_mgt->cdev);
+cdev_add_fail:
+	devm_kfree(NBL_COMMON_TO_DEV(common), st_mgt);
+malloc_fail:
+	nbl_serv_free_subdev_id(st_table, subdev_id);
+	return ret;
+}
+
+static void nbl_serv_remove_st(void *priv, void *st_table_param)
+{
+	struct nbl_software_tool_table *st_table =
+		(struct nbl_software_tool_table *)st_table_param;
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_serv_st_mgt *st_mgt = NBL_SERV_MGT_TO_ST_MGT(serv_mgt);
+	struct nbl_common_info *common = NBL_SERV_MGT_TO_COMMON(serv_mgt);
+
+	if (!st_mgt)
+		return;
+
+	device_destroy(st_table->cls, st_mgt->devno);
+	cdev_del(&st_mgt->cdev);
+
+	nbl_serv_free_subdev_id(st_table, st_mgt->subdev_id);
+
+	NBL_SERV_MGT_TO_ST_MGT(serv_mgt) = NULL;
+	devm_kfree(NBL_COMMON_TO_DEV(common), st_mgt);
+}
+
 static int nbl_serv_get_board_id(void *priv)
 {
 	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -3240,6 +3512,24 @@ static void nbl_serv_remove_vf_config(void *priv)
 	net_resource_mgt->num_vfs = 0;
 }
 
+static void nbl_serv_register_dev_name(void *priv, u16 vsi_id, char *name)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+	disp_ops->register_dev_name(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id,
+				    name);
+}
+
+static void nbl_serv_get_dev_name(void *priv, u16 vsi_id, char *name)
+{
+	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
+	struct nbl_dispatch_ops *disp_ops = NBL_SERV_MGT_TO_DISP_OPS(serv_mgt);
+
+	disp_ops->get_dev_name(NBL_SERV_MGT_TO_DISP_PRIV(serv_mgt), vsi_id,
+			       name);
+}
+
 static int nbl_serv_setup_vf_resource(void *priv, int num_vfs)
 {
 	struct nbl_service_mgt *serv_mgt = (struct nbl_service_mgt *)priv;
@@ -3386,10 +3676,14 @@ static struct nbl_service_ops serv_ops = {
 	.check_fw_heartbeat = nbl_serv_check_fw_heartbeat,
 	.check_fw_reset = nbl_serv_check_fw_reset,
 	.set_netdev_carrier_state = nbl_serv_set_netdev_carrier_state,
+	.setup_st = nbl_serv_setup_st,
+	.remove_st = nbl_serv_remove_st,
+	.register_real_st_name = nbl_serv_register_real_st_name,
 
 	.setup_vf_config = nbl_serv_setup_vf_config,
 	.remove_vf_config = nbl_serv_remove_vf_config,
-
+	.register_dev_name = nbl_serv_register_dev_name,
+	.get_dev_name = nbl_serv_get_dev_name,
 	.setup_vf_resource = nbl_serv_setup_vf_resource,
 	.remove_vf_resource = nbl_serv_remove_vf_resource,

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
index 1357a7f7f26f..ba9e9761a062 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_service.h
@@ -9,6 +9,8 @@
 #include <linux/mm.h>
 #include <linux/ptr_ring.h>
 
+#include <linux/cdev.h>
+
 #include "nbl_core.h"
 
 #define NBL_SERV_MGT_TO_COMMON(serv_mgt) ((serv_mgt)->common)
@@ -20,6 +22,7 @@
 #define NBL_SERV_MGT_TO_RING_MGT(serv_mgt) (&(serv_mgt)->ring_mgt)
 #define NBL_SERV_MGT_TO_FLOW_MGT(serv_mgt) (&(serv_mgt)->flow_mgt)
 #define NBL_SERV_MGT_TO_NET_RES_MGT(serv_mgt) ((serv_mgt)->net_resource_mgt)
+#define NBL_SERV_MGT_TO_ST_MGT(serv_mgt) ((serv_mgt)->st_mgt)
 #define NBL_SERV_MGT_TO_DISP_OPS_TBL(serv_mgt) ((serv_mgt)->disp_ops_tbl)
 #define NBL_SERV_MGT_TO_DISP_OPS(serv_mgt) \
@@ -191,6 +194,26 @@ struct nbl_serv_net_resource_mgt {
 	int max_tx_rate;
 };
 
+#define IOCTL_TYPE 'n'
+#define IOCTL_PASSTHROUGH \
+	_IOWR(IOCTL_TYPE, 0x01, struct nbl_passthrough_fw_cmd)
+#define IOCTL_ST_INFO _IOR(IOCTL_TYPE, 0x02, struct nbl_st_info_param)
+
+#define IOCTL_ST_INFO_VERSION 0x10 /* 1.0 */
+
+struct nbl_serv_st_mgt {
+	void *serv_mgt;
+	struct cdev cdev;
+	int major;
+	int minor;
+	dev_t devno;
+	int subdev_id;
+	char st_name[NBL_RESTOOL_NAME_LEN];
+	char real_st_name[NBL_RESTOOL_NAME_LEN];
+	bool real_st_name_valid;
+	u8 resv[3];
+};
+
 struct nbl_service_mgt {
 	struct nbl_common_info *common;
 	struct nbl_dispatch_ops_tbl *disp_ops_tbl;
@@ -198,6 +221,7 @@ struct nbl_service_mgt {
 	struct nbl_serv_ring_mgt ring_mgt;
 	struct nbl_serv_flow_mgt flow_mgt;
 	struct nbl_serv_net_resource_mgt *net_resource_mgt;
+	struct nbl_serv_st_mgt *st_mgt;
 };
 
 struct nbl_serv_notify_vlan_param {

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.c
new file mode 100644
index 000000000000..02dc0ecc481e
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#include "nbl_dev.h"
+
+#define NBL_SET_RO_ATTR(dev_name_attr, attr_name, attr_show) do { \
+	typeof(dev_name_attr) _name_attr = (dev_name_attr); \
+	(_name_attr)->attr.name = __stringify(attr_name); \
+	(_name_attr)->attr.mode = SYSFS_PREALLOC | \
+				  VERIFY_OCTAL_PERMISSIONS(0444); \
+	(_name_attr)->show = attr_show; \
+	(_name_attr)->store = NULL; \
+} while (0)
+
+static ssize_t net_rep_show(struct device *dev,
+			    struct nbl_netdev_name_attr *attr, char *buf)
+{
+	return scnprintf(buf, IFNAMSIZ, "%s\n", attr->net_dev_name);
+}
+
+static ssize_t nbl_st_name_show(struct kobject *kobj,
+				struct kobj_attribute *attr, char *buf)
+{
+	struct nbl_sysfs_st_info *st_info =
+		container_of(attr, struct nbl_sysfs_st_info, kobj_attr);
+	struct nbl_dev_net *net_dev = st_info->net_dev;
+	struct nbl_netdev_priv *net_priv = netdev_priv(net_dev->netdev);
+	struct nbl_adapter *adapter = net_priv->adapter;
+	struct nbl_dev_mgt *dev_mgt = NBL_ADAP_TO_DEV_MGT(adapter);
+	struct nbl_dev_st_dev *st_dev = NBL_DEV_MGT_TO_ST_DEV(dev_mgt);
+
+	return sysfs_emit(buf, "nblst/%s\n", st_dev->st_name);
+}
+
+void nbl_netdev_remove_st_sysfs(struct nbl_dev_net *net_dev)
+{
+	if (!net_dev->st_name.st_name_kobj)
+		return;
+
+	sysfs_remove_file(net_dev->st_name.st_name_kobj,
+			  &net_dev->st_name.st_info.kobj_attr.attr);
+
+	kobject_put(net_dev->st_name.st_name_kobj);
+}
+
+int nbl_netdev_add_st_sysfs(struct net_device *netdev,
+			    struct nbl_dev_net *net_dev)
+{
+	int ret;
+
+	net_dev->st_name.st_name_kobj =
+		kobject_create_and_add("resource_tool", &netdev->dev.kobj);
+	if (!net_dev->st_name.st_name_kobj)
+		return -ENOMEM;
+
+	net_dev->st_name.st_info.net_dev = net_dev;
+	sysfs_attr_init(&net_dev->st_name.st_info.kobj_attr.attr);
+	net_dev->st_name.st_info.kobj_attr.attr.name = "st_name";
+	net_dev->st_name.st_info.kobj_attr.attr.mode = 0444;
+	net_dev->st_name.st_info.kobj_attr.show = nbl_st_name_show;
+
+	ret = sysfs_create_file(net_dev->st_name.st_name_kobj,
+				&net_dev->st_name.st_info.kobj_attr.attr);
+	if (ret) {
+		netdev_err(netdev, "Failed to create st_name sysfs file\n");
+		kobject_put(net_dev->st_name.st_name_kobj);
+		net_dev->st_name.st_name_kobj = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+void nbl_net_add_name_attr(struct nbl_netdev_name_attr *attr, char *rep_name)
+{
+	sysfs_attr_init(&attr->attr);
+	NBL_SET_RO_ATTR(attr, dev_name, net_rep_show);
+	strscpy(attr->net_dev_name, rep_name, IFNAMSIZ);
+}
+
+void nbl_net_remove_dev_attr(struct nbl_dev_net *net_dev)
+{
+	sysfs_remove_file(&net_dev->netdev->dev.kobj,
+			  &net_dev->dev_attr.dev_name_attr.attr);
+}

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.h
new file mode 100644
index 000000000000..34e5d63addf0
--- /dev/null
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_core/nbl_sysfs.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025 Nebula Matrix Limited.
+ * Author:
+ */
+
+#ifndef _NBL_SYSFS_H_
+#define _NBL_SYSFS_H_
+
+struct nbl_sysfs_st_info {
+	struct nbl_dev_net *net_dev;
+	struct kobj_attribute kobj_attr;
+};
+
+struct nbl_st_name {
+	struct kobject *st_name_kobj;
+	struct nbl_sysfs_st_info st_info;
+};
+
+#endif

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
index 29331407fc41..5a7b4b26bf1b 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_dev.h
@@ -27,4 +27,6 @@ void nbl_dev_stop(void *p);
 int nbl_dev_setup_vf_config(void *p, int num_vfs);
 void nbl_dev_remove_vf_config(void *p);
+void nbl_dev_register_dev_name(void *p);
+void nbl_dev_get_dev_name(void *p, char *dev_name);
 #endif

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
index d7490a60bebb..a908e2f6cb97 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_def_service.h
@@ -125,6 +125,10 @@ struct nbl_service_ops {
 	bool (*get_product_fix_cap)(void *priv,
 				    enum nbl_fix_cap_type cap_type);
 
+	int (*setup_st)(void *priv, void *st_table_param, char *st_name);
+	void (*remove_st)(void *priv, void *st_table_param);
+	void (*register_real_st_name)(void *priv, char *st_name);
+
 	int (*setup_vf_config)(void *priv, int num_vfs, bool is_flush);
 	void (*remove_vf_config)(void *priv);
 	void (*register_dev_name)(void *priv, u16 vsi_id, char *name);

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
index 38a9d47ab6ca..0c568488bd1a 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_include/nbl_include.h
@@ -379,6 +379,25 @@ struct nbl_cmd_vf_num {
 	u16 vf_max_num[NBL_VF_NUM_CMD_LEN];
 };
 
+#define NBL_RESTOOL_NAME_LEN 32
+#define NBL_ST_INFO_NAME_LEN (64)
+#define NBL_ST_INFO_NETDEV_MAX (8)
+#define NBL_ST_INFO_RESERVED_LEN (344)
+struct nbl_st_info_param {
+	u8 version;
+	u8 bus;
+	u8 devid;
+	u8 function;
+	u16 domain;
+	u16 rsv0;
+	char driver_name[NBL_ST_INFO_NAME_LEN];
+	char driver_ver[NBL_ST_INFO_NAME_LEN];
+	char netdev_name[NBL_ST_INFO_NETDEV_MAX][NBL_ST_INFO_NAME_LEN];
+	char real_chrdev_flag;
+	char real_chrdev_name[31];
+	u8 rsv[NBL_ST_INFO_RESERVED_LEN];
+} __packed;
+
 #define NBL_OPS_CALL(func, para) \
 	do { \
 		typeof(func) _func = (func); \

diff --git a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
index 70e62fa0dd97..9749823f5a83 100644
--- a/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
+++ b/drivers/net/ethernet/nebula-matrix/nbl/nbl_main.c
@@ -7,6 +7,7 @@
 #include <linux/aer.h>
 #include "nbl_core.h"
 
+static struct nbl_software_tool_table nbl_st_table;
 static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = {
 	{
 		.hw_init = nbl_hw_init_leonis,
@@ -18,6 +19,11 @@ static struct nbl_product_base_ops nbl_product_base_ops[NBL_PRODUCT_MAX] = {
 	},
 };
 
+static char *nblst_cdevnode(const struct device *dev, umode_t *mode)
+{
+	return kasprintf(GFP_KERNEL, "nblst/%s", dev_name(dev));
+}
+
 int nbl_core_start(struct nbl_adapter *adapter, struct nbl_init_param *param)
 {
 	int ret = 0;
@@ -134,6 +140,43 @@ void nbl_core_remove(struct nbl_adapter *adapter)
 	devm_kfree(dev, adapter);
 }
 
+int nbl_st_init(struct nbl_software_tool_table *st_table)
+{
+	dev_t devid;
+	int ret;
+
+	ret = alloc_chrdev_region(&devid, 0, NBL_ST_MAX_DEVICE_NUM, "nblst");
+	if (ret < 0)
+		return ret;
+
+	st_table->major = MAJOR(devid);
+	st_table->devno = devid;
+
+	st_table->cls = class_create("nblst_cls");
+	if (IS_ERR(st_table->cls)) {
+		ret = PTR_ERR(st_table->cls);
+		unregister_chrdev_region(st_table->devno,
+					 NBL_ST_MAX_DEVICE_NUM);
+		return ret;
+	}
+	st_table->cls->devnode = nblst_cdevnode;
+
+	return 0;
+}
+
+void nbl_st_remove(struct nbl_software_tool_table *st_table)
+{
+	class_destroy(st_table->cls);
+	unregister_chrdev_region(st_table->devno, NBL_ST_MAX_DEVICE_NUM);
+}
+
+struct nbl_software_tool_table *nbl_get_st_table(void)
+{
+	return &nbl_st_table;
+}
+
 static void nbl_get_func_param(struct pci_dev *pdev,
 			       kernel_ulong_t driver_data,
 			       struct nbl_init_param *param)
 {
@@ -243,6 +286,8 @@ static __maybe_unused int nbl_sriov_configure(struct pci_dev *pdev, int num_vfs)
 		return 0;
 	}
 
+	/* Register the PF name to the AF first; the VF name depends on it. */
+	nbl_dev_register_dev_name(adapter);
 	err = nbl_dev_setup_vf_config(adapter, num_vfs);
 	if (err) {
 		dev_err(&pdev->dev, "nbl setup vf config failed %d!\n", err);
@@ -357,6 +402,8 @@ static int __init nbl_module_init(void)
 		pr_err("Failed to create wq, err = %d\n", status);
 		goto wq_create_failed;
 	}
+	nbl_st_init(nbl_get_st_table());
+
 	status = pci_register_driver(&nbl_driver);
 	if (status) {
 		pr_err("Failed to
register PCI driver, err = %d\n", status); @@ -375,6 +422,8 @@ static void __exit nbl_module_exit(void) { pci_unregister_driver(&nbl_driver); + nbl_st_remove(nbl_get_st_table()); + nbl_common_destroy_wq(); pr_info("nbl module unloaded\n"); -- 2.47.3 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v2 net-next 15/15] net/nebula-matrix: add st_sysfs and vf name sysfs
  2026-01-09 10:01 ` [PATCH v2 net-next 15/15] net/nebula-matrix: add st_sysfs and vf name sysfs illusion.wang
@ 2026-01-09 18:40   ` Andrew Lunn
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Lunn @ 2026-01-09 18:40 UTC (permalink / raw)
To: illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
	kuba, linux-doc, lorenzo, pabeni, horms, vadim.fedorenko,
	lukas.bulwahn, edumazet, open list

On Fri, Jan 09, 2026 at 06:01:33PM +0800, illusion.wang wrote:
> Add st_sysfs to support our private nblconfig tool.

Private tools are unlikely to be accepted. I suggest you drop this
patch for the moment. Once you get the rest of the driver merged, we
can discuss how to do something acceptable.

	Andrew

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs
  2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
  ` (14 preceding siblings ...)
  2026-01-09 10:01 ` [PATCH v2 net-next 15/15] net/nebula-matrix: add st_sysfs and vf name sysfs illusion.wang
@ 2026-01-10  0:20 ` Jakub Kicinski
  15 siblings, 0 replies; 19+ messages in thread
From: Jakub Kicinski @ 2026-01-10 0:20 UTC (permalink / raw)
To: illusion.wang
Cc: dimon.zhao, alvin.wang, sam.chen, netdev, andrew+netdev, corbet,
	linux-doc, lorenzo, pabeni, horms, vadim.fedorenko,
	lukas.bulwahn, edumazet, open list

On Fri, 9 Jan 2026 18:01:18 +0800 illusion.wang wrote:
> 61 files changed, 43278 insertions(+)

No way anyone can review 45kLoC. Please cut this down to a minimal
driver - ~5kLoC + patch 4.

^ permalink raw reply	[flat|nested] 19+ messages in thread
end of thread, other threads:[~2026-01-10  0:20 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-09 10:01 [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 01/15] net/nebula-matrix: add minimum nbl build framework illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 02/15] net/nebula-matrix: add simple probe/remove illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 03/15] net/nebula-matrix: add HW layer definitions and implementation illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 04/15] net/nebula-matrix: add machine-generated headers and chip definitions illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 05/15] net/nebula-matrix: add channel layer definitions and implementation illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 06/15] net/nebula-matrix: add resource " illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 07/15] net/nebula-matrix: add intr resource " illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 08/15] net/nebula-matrix: add vsi, queue, adminq " illusion.wang
2026-01-09 18:38   ` Andrew Lunn
2026-01-09 10:01 ` [PATCH v2 net-next 09/15] net/nebula-matrix: add flow " illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 10/15] net/nebula-matrix: add txrx " illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 11/15] net/nebula-matrix: add Dispatch layer " illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 12/15] net/nebula-matrix: add Service " illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 13/15] net/nebula-matrix: add Dev init,remove operation illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 14/15] net/nebula-matrix: add Dev start, stop operation illusion.wang
2026-01-09 10:01 ` [PATCH v2 net-next 15/15] net/nebula-matrix: add st_sysfs and vf name sysfs illusion.wang
2026-01-09 18:40   ` Andrew Lunn
2026-01-10  0:20 ` [PATCH v2 net-next 00/15] nbl driver for Nebulamatrix NICs Jakub Kicinski