* [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 13:58 ` Przemek Kitszel
2024-12-18 18:20 ` Andrew Lunn
2024-12-18 10:50 ` [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ Xin Tian
` (15 subsequent siblings)
16 siblings, 2 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add yunsilicon xsc driver basic framework, including xsc_pci driver
and xsc_eth driver
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/yunsilicon/Kconfig | 26 ++
drivers/net/ethernet/yunsilicon/Makefile | 8 +
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 132 +++++++++
.../net/ethernet/yunsilicon/xsc/net/Kconfig | 16 ++
.../net/ethernet/yunsilicon/xsc/net/Makefile | 9 +
.../net/ethernet/yunsilicon/xsc/pci/Kconfig | 16 ++
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 9 +
.../net/ethernet/yunsilicon/xsc/pci/main.c | 272 ++++++++++++++++++
10 files changed, 490 insertions(+)
create mode 100644 drivers/net/ethernet/yunsilicon/Kconfig
create mode 100644 drivers/net/ethernet/yunsilicon/Makefile
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Makefile
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/main.c
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index 0baac25db..aa6016597 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig"
source "drivers/net/ethernet/ibm/Kconfig"
source "drivers/net/ethernet/intel/Kconfig"
source "drivers/net/ethernet/xscale/Kconfig"
+source "drivers/net/ethernet/yunsilicon/Kconfig"
config JME
tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index c03203439..c16c34d4b 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -51,6 +51,7 @@ obj-$(CONFIG_NET_VENDOR_INTEL) += intel/
obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/
obj-$(CONFIG_NET_VENDOR_MICROSOFT) += microsoft/
obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/
+obj-$(CONFIG_NET_VENDOR_YUNSILICON) += yunsilicon/
obj-$(CONFIG_JME) += jme.o
obj-$(CONFIG_KORINA) += korina.o
obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o
diff --git a/drivers/net/ethernet/yunsilicon/Kconfig b/drivers/net/ethernet/yunsilicon/Kconfig
new file mode 100644
index 000000000..c766390b4
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/Kconfig
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+# Yunsilicon driver configuration
+#
+
+config NET_VENDOR_YUNSILICON
+ bool "Yunsilicon devices"
+ default y
+ depends on PCI || NET
+ depends on ARM64 || X86_64
+ help
+ If you have a network (Ethernet) device belonging to this class,
+ say Y.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Yunsilicon cards. If you say Y, you will be asked
+ for your specific card in the following questions.
+
+if NET_VENDOR_YUNSILICON
+
+source "drivers/net/ethernet/yunsilicon/xsc/net/Kconfig"
+source "drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig"
+
+endif # NET_VENDOR_YUNSILICON
diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile
new file mode 100644
index 000000000..6fc8259a7
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+# Makefile for the Yunsilicon device drivers.
+#
+
+# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
+obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
new file mode 100644
index 000000000..5ed12760e
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -0,0 +1,132 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_CORE_H
+#define XSC_CORE_H
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+
+extern unsigned int xsc_log_level;
+
+#define XSC_PCI_VENDOR_ID 0x1f67
+
+#define XSC_MC_PF_DEV_ID 0x1011
+#define XSC_MC_VF_DEV_ID 0x1012
+#define XSC_MC_PF_DEV_ID_DIAMOND 0x1021
+
+#define XSC_MF_HOST_PF_DEV_ID 0x1051
+#define XSC_MF_HOST_VF_DEV_ID 0x1052
+#define XSC_MF_SOC_PF_DEV_ID 0x1053
+
+#define XSC_MS_PF_DEV_ID 0x1111
+#define XSC_MS_VF_DEV_ID 0x1112
+
+#define XSC_MV_HOST_PF_DEV_ID 0x1151
+#define XSC_MV_HOST_VF_DEV_ID 0x1152
+#define XSC_MV_SOC_PF_DEV_ID 0x1153
+
+enum {
+ XSC_LOG_LEVEL_DBG = 0,
+ XSC_LOG_LEVEL_INFO = 1,
+ XSC_LOG_LEVEL_WARN = 2,
+ XSC_LOG_LEVEL_ERR = 3,
+};
+
+#define xsc_dev_log(condition, level, dev, fmt, ...) \
+do { \
+ if (condition) \
+ dev_printk(level, dev, dev_fmt(fmt), ##__VA_ARGS__); \
+} while (0)
+
+#define xsc_core_dbg(__dev, format, ...) \
+ xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_DBG, KERN_DEBUG, \
+ &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
+ __func__, __LINE__, current->pid, ##__VA_ARGS__)
+
+#define xsc_core_dbg_once(__dev, format, ...) \
+ dev_dbg_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
+ __func__, __LINE__, current->pid, \
+ ##__VA_ARGS__)
+
+#define xsc_core_dbg_mask(__dev, mask, format, ...) \
+do { \
+ if ((mask) & xsc_debug_mask) \
+ xsc_core_dbg(__dev, format, ##__VA_ARGS__); \
+} while (0)
+
+#define xsc_core_err(__dev, format, ...) \
+ xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_ERR, KERN_ERR, \
+ &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
+ __func__, __LINE__, current->pid, ##__VA_ARGS__)
+
+#define xsc_core_err_rl(__dev, format, ...) \
+ dev_err_ratelimited(&(__dev)->pdev->dev, \
+ "%s:%d:(pid %d): " format, \
+ __func__, __LINE__, current->pid, \
+ ##__VA_ARGS__)
+
+#define xsc_core_warn(__dev, format, ...) \
+ xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_WARN, KERN_WARNING, \
+ &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
+ __func__, __LINE__, current->pid, ##__VA_ARGS__)
+
+#define xsc_core_info(__dev, format, ...) \
+ xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_INFO, KERN_INFO, \
+ &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
+ __func__, __LINE__, current->pid, ##__VA_ARGS__)
+
+#define xsc_pr_debug(format, ...) \
+do { \
+ if (xsc_log_level <= XSC_LOG_LEVEL_DBG) \
+ pr_debug(format, ##__VA_ARGS__); \
+} while (0)
+
+#define assert(__dev, expr) \
+do { \
+ if (!(expr)) { \
+ dev_err(&(__dev)->pdev->dev, \
+ "Assertion failed! %s, %s, %s, line %d\n", \
+ #expr, __FILE__, __func__, __LINE__); \
+ } \
+} while (0)
+
+enum {
+ XSC_MAX_NAME_LEN = 32,
+};
+
+struct xsc_dev_resource {
+ struct mutex alloc_mutex; /* protect buffer alocation according to numa node */
+};
+
+enum xsc_pci_state {
+ XSC_PCI_STATE_DISABLED,
+ XSC_PCI_STATE_ENABLED,
+};
+
+struct xsc_priv {
+ char name[XSC_MAX_NAME_LEN];
+ struct list_head dev_list;
+ struct list_head ctx_list;
+ spinlock_t ctx_lock; /* protect ctx_list */
+ int numa_node;
+};
+
+struct xsc_core_device {
+ struct pci_dev *pdev;
+ struct device *device;
+ struct xsc_priv priv;
+ struct xsc_dev_resource *dev_res;
+
+ void __iomem *bar;
+ int bar_num;
+
+ struct mutex pci_state_mutex; /* protect pci_state */
+ enum xsc_pci_state pci_state;
+ struct mutex intf_state_mutex; /* protect intf_state */
+ unsigned long intf_state;
+};
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
new file mode 100644
index 000000000..0d9a4ff8a
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+# Yunsilicon driver configuration
+#
+
+config YUNSILICON_XSC_ETH
+ tristate "Yunsilicon XSC ethernet driver"
+ default n
+ depends on YUNSILICON_XSC_PCI
+ help
+ This driver provides ethernet support for
+ Yunsilicon XSC devices.
+
+ To compile this driver as a module, choose M here. The module
+ will be called xsc_eth.
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
new file mode 100644
index 000000000..2811433af
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+
+ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
+
+obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
+
+xsc_eth-y := main.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
new file mode 100644
index 000000000..2b6d79905
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+# Yunsilicon PCI configuration
+#
+
+config YUNSILICON_XSC_PCI
+ tristate "Yunsilicon XSC PCI driver"
+ default n
+ select PAGE_POOL
+ help
+ This driver is common for Yunsilicon XSC
+ ethernet and RDMA drivers.
+
+ To compile this driver as a module, choose M here. The module
+ will be called xsc_pci.
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
new file mode 100644
index 000000000..709270df8
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+
+ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
+
+obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
+
+xsc_pci-y := main.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
new file mode 100644
index 000000000..cbe0bfbd1
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -0,0 +1,272 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "common/xsc_core.h"
+
+unsigned int xsc_log_level = XSC_LOG_LEVEL_WARN;
+module_param_named(log_level, xsc_log_level, uint, 0644);
+MODULE_PARM_DESC(log_level,
+ "lowest log level to print: 0=debug, 1=info, 2=warning, 3=error. Default=1");
+EXPORT_SYMBOL(xsc_log_level);
+
+#define XSC_PCI_DRV_DESC "Yunsilicon Xsc PCI driver"
+
+static const struct pci_device_id xsc_pci_id_table[] = {
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID_DIAMOND) },
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_HOST_PF_DEV_ID) },
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_SOC_PF_DEV_ID) },
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MS_PF_DEV_ID) },
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_HOST_PF_DEV_ID) },
+ { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_SOC_PF_DEV_ID) },
+ { 0 }
+};
+
+static int set_dma_caps(struct pci_dev *pdev)
+{
+ int err;
+
+ err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+ if (err)
+ err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ else
+ err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
+
+ if (!err)
+ dma_set_max_seg_size(&pdev->dev, SZ_2G);
+
+ return err;
+}
+
+static int xsc_pci_enable_device(struct xsc_core_device *xdev)
+{
+ struct pci_dev *pdev = xdev->pdev;
+ int err = 0;
+
+ mutex_lock(&xdev->pci_state_mutex);
+ if (xdev->pci_state == XSC_PCI_STATE_DISABLED) {
+ err = pci_enable_device(pdev);
+ if (!err)
+ xdev->pci_state = XSC_PCI_STATE_ENABLED;
+ }
+ mutex_unlock(&xdev->pci_state_mutex);
+
+ return err;
+}
+
+static void xsc_pci_disable_device(struct xsc_core_device *xdev)
+{
+ struct pci_dev *pdev = xdev->pdev;
+
+ mutex_lock(&xdev->pci_state_mutex);
+ if (xdev->pci_state == XSC_PCI_STATE_ENABLED) {
+ pci_disable_device(pdev);
+ xdev->pci_state = XSC_PCI_STATE_DISABLED;
+ }
+ mutex_unlock(&xdev->pci_state_mutex);
+}
+
+static int xsc_pci_init(struct xsc_core_device *xdev, const struct pci_device_id *id)
+{
+ struct pci_dev *pdev = xdev->pdev;
+ void __iomem *bar_base;
+ int bar_num = 0;
+ int err;
+
+ mutex_init(&xdev->pci_state_mutex);
+ xdev->priv.numa_node = dev_to_node(&pdev->dev);
+
+ err = xsc_pci_enable_device(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to enable PCI device: err=%d\n", err);
+ goto err_ret;
+ }
+
+ err = pci_request_region(pdev, bar_num, KBUILD_MODNAME);
+ if (err) {
+ xsc_core_err(xdev, "failed to request %s pci_region=%d: err=%d\n",
+ KBUILD_MODNAME, bar_num, err);
+ goto err_disable;
+ }
+
+ pci_set_master(pdev);
+
+ err = set_dma_caps(pdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to set DMA capabilities mask: err=%d\n", err);
+ goto err_clr_master;
+ }
+
+ bar_base = pci_ioremap_bar(pdev, bar_num);
+ if (!bar_base) {
+ xsc_core_err(xdev, "failed to ioremap %s bar%d\n", KBUILD_MODNAME, bar_num);
+ goto err_clr_master;
+ }
+
+ err = pci_save_state(pdev);
+ if (err) {
+ xsc_core_err(xdev, "pci_save_state failed: err=%d\n", err);
+ goto err_io_unmap;
+ }
+
+ xdev->bar_num = bar_num;
+ xdev->bar = bar_base;
+
+ return 0;
+
+err_io_unmap:
+ pci_iounmap(pdev, bar_base);
+err_clr_master:
+ pci_clear_master(pdev);
+ pci_release_region(pdev, bar_num);
+err_disable:
+ xsc_pci_disable_device(xdev);
+err_ret:
+ return err;
+}
+
+static void xsc_pci_fini(struct xsc_core_device *xdev)
+{
+ struct pci_dev *pdev = xdev->pdev;
+
+ if (xdev->bar)
+ pci_iounmap(pdev, xdev->bar);
+ pci_clear_master(pdev);
+ pci_release_region(pdev, xdev->bar_num);
+ xsc_pci_disable_device(xdev);
+}
+
+static int xsc_priv_init(struct xsc_core_device *xdev)
+{
+ struct xsc_priv *priv = &xdev->priv;
+
+ strscpy(priv->name, dev_name(&xdev->pdev->dev), XSC_MAX_NAME_LEN);
+
+ INIT_LIST_HEAD(&priv->ctx_list);
+ spin_lock_init(&priv->ctx_lock);
+ mutex_init(&xdev->intf_state_mutex);
+
+ return 0;
+}
+
+static int xsc_dev_res_init(struct xsc_core_device *xdev)
+{
+ struct xsc_dev_resource *dev_res;
+
+ dev_res = kvzalloc(sizeof(*dev_res), GFP_KERNEL);
+ if (!dev_res)
+ return -ENOMEM;
+
+ xdev->dev_res = dev_res;
+ mutex_init(&dev_res->alloc_mutex);
+
+ return 0;
+}
+
+static void xsc_dev_res_cleanup(struct xsc_core_device *xdev)
+{
+ kfree(xdev->dev_res);
+}
+
+static int xsc_core_dev_init(struct xsc_core_device *xdev)
+{
+ int err;
+
+ xsc_priv_init(xdev);
+
+ err = xsc_dev_res_init(xdev);
+ if (err) {
+ xsc_core_err(xdev, "xsc dev res init failed %d\n", err);
+ goto out;
+ }
+
+ return 0;
+out:
+ return err;
+}
+
+static void xsc_core_dev_cleanup(struct xsc_core_device *xdev)
+{
+ xsc_dev_res_cleanup(xdev);
+}
+
+static int xsc_pci_probe(struct pci_dev *pci_dev,
+ const struct pci_device_id *id)
+{
+ struct xsc_core_device *xdev;
+ int err;
+
+ xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
+ if (!xdev)
+ return -ENOMEM;
+
+ xdev->pdev = pci_dev;
+ xdev->device = &pci_dev->dev;
+
+ pci_set_drvdata(pci_dev, xdev);
+ err = xsc_pci_init(xdev, id);
+ if (err) {
+ xsc_core_err(xdev, "xsc_pci_init failed %d\n", err);
+ goto err_unset_pci_drvdata;
+ }
+
+ err = xsc_core_dev_init(xdev);
+ if (err) {
+ xsc_core_err(xdev, "xsc_core_dev_init failed %d\n", err);
+ goto err_pci_fini;
+ }
+
+ return 0;
+err_pci_fini:
+ xsc_pci_fini(xdev);
+err_unset_pci_drvdata:
+ pci_set_drvdata(pci_dev, NULL);
+ kfree(xdev);
+
+ return err;
+}
+
+static void xsc_pci_remove(struct pci_dev *pci_dev)
+{
+ struct xsc_core_device *xdev = pci_get_drvdata(pci_dev);
+
+ xsc_core_dev_cleanup(xdev);
+ xsc_pci_fini(xdev);
+ pci_set_drvdata(pci_dev, NULL);
+ kfree(xdev);
+}
+
+static struct pci_driver xsc_pci_driver = {
+ .name = "xsc-pci",
+ .id_table = xsc_pci_id_table,
+ .probe = xsc_pci_probe,
+ .remove = xsc_pci_remove,
+};
+
+static int __init xsc_init(void)
+{
+ int err;
+
+ err = pci_register_driver(&xsc_pci_driver);
+ if (err) {
+ pr_err("failed to register pci driver\n");
+ goto out;
+ }
+ return 0;
+
+out:
+ return err;
+}
+
+static void __exit xsc_fini(void)
+{
+ pci_unregister_driver(&xsc_pci_driver);
+}
+
+module_init(xsc_init);
+module_exit(xsc_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION(XSC_PCI_DRV_DESC);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
2024-12-18 10:50 ` [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 14:46 ` Przemek Kitszel
2024-12-18 10:50 ` [PATCH v1 03/16] net-next/yunsilicon: Add hardware setup APIs Xin Tian
` (14 subsequent siblings)
16 siblings, 1 reply; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Enable cmd queue to support driver-firmware communication.
Hardware control will be performed through cmdq mostly.
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../yunsilicon/xsc/common/xsc_auto_hw.h | 94 +
.../ethernet/yunsilicon/xsc/common/xsc_cmd.h | 2513 +++++++++++++++++
.../ethernet/yunsilicon/xsc/common/xsc_cmdq.h | 218 ++
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 15 +
.../yunsilicon/xsc/common/xsc_driver.h | 25 +
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +-
.../net/ethernet/yunsilicon/xsc/pci/cmdq.c | 2000 +++++++++++++
.../net/ethernet/yunsilicon/xsc/pci/main.c | 96 +
8 files changed, 4962 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h
new file mode 100644
index 000000000..b7d57dcf9
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_HW_H
+#define XSC_HW_H
+
+//hif_irq_csr_defines.h
+#define HIF_IRQ_TBL2IRQ_TBL_RD_DONE_INT_MSIX_REG_ADDR 0xa1100070
+
+//hif_cpm_csr_defines.h
+#define HIF_CPM_LOCK_GET_REG_ADDR 0xa0000104
+#define HIF_CPM_LOCK_PUT_REG_ADDR 0xa0000108
+#define HIF_CPM_LOCK_AVAIL_REG_ADDR 0xa000010c
+#define HIF_CPM_IDA_DATA_MEM_ADDR 0xa0000800
+#define HIF_CPM_IDA_CMD_REG_ADDR 0xa0000020
+#define HIF_CPM_IDA_ADDR_REG_ADDR 0xa0000080
+#define HIF_CPM_IDA_BUSY_REG_ADDR 0xa0000100
+#define HIF_CPM_IDA_CMD_REG_IDA_IDX_WIDTH 5
+#define HIF_CPM_IDA_CMD_REG_IDA_LEN_WIDTH 4
+#define HIF_CPM_IDA_CMD_REG_IDA_R0W1_WIDTH 1
+#define HIF_CPM_LOCK_GET_REG_LOCK_VLD_SHIFT 5
+#define HIF_CPM_LOCK_GET_REG_LOCK_IDX_MASK 0x1f
+#define HIF_CPM_IDA_ADDR_REG_STRIDE 0x4
+#define HIF_CPM_CHIP_VERSION_H_REG_ADDR 0xa0000010
+
+//mmc_csr_defines.h
+#define MMC_MPT_TBL_MEM_DEPTH 32768
+#define MMC_MTT_TBL_MEM_DEPTH 262144
+#define MMC_MPT_TBL_MEM_WIDTH 256
+#define MMC_MTT_TBL_MEM_WIDTH 64
+#define MMC_MPT_TBL_MEM_ADDR 0xa4100000
+#define MMC_MTT_TBL_MEM_ADDR 0xa4200000
+
+//clsf_dma_csr_defines.h
+#define CLSF_DMA_DMA_UL_BUSY_REG_ADDR 0xa6010048
+#define CLSF_DMA_DMA_DL_DONE_REG_ADDR 0xa60100d0
+#define CLSF_DMA_DMA_DL_SUCCESS_REG_ADDR 0xa60100c0
+#define CLSF_DMA_ERR_CODE_CLR_REG_ADDR 0xa60100d4
+#define CLSF_DMA_DMA_RD_TABLE_ID_REG_DMA_RD_TBL_ID_MASK 0x7f
+#define CLSF_DMA_DMA_RD_TABLE_ID_REG_ADDR 0xa6010020
+#define CLSF_DMA_DMA_RD_ADDR_REG_DMA_RD_BURST_NUM_SHIFT 16
+#define CLSF_DMA_DMA_RD_ADDR_REG_ADDR 0xa6010024
+#define CLSF_DMA_INDRW_RD_START_REG_ADDR 0xa6010028
+
+//hif_tbl_csr_defines.h
+#define HIF_TBL_TBL_DL_BUSY_REG_ADDR 0xa1060030
+#define HIF_TBL_TBL_DL_REQ_REG_TBL_DL_LEN_SHIFT 12
+#define HIF_TBL_TBL_DL_REQ_REG_TBL_DL_HOST_ID_SHIFT 11
+#define HIF_TBL_TBL_DL_REQ_REG_ADDR 0xa1060020
+#define HIF_TBL_TBL_DL_ADDR_L_REG_TBL_DL_ADDR_L_MASK 0xffffffff
+#define HIF_TBL_TBL_DL_ADDR_L_REG_ADDR 0xa1060024
+#define HIF_TBL_TBL_DL_ADDR_H_REG_TBL_DL_ADDR_H_MASK 0xffffffff
+#define HIF_TBL_TBL_DL_ADDR_H_REG_ADDR 0xa1060028
+#define HIF_TBL_TBL_DL_START_REG_ADDR 0xa106002c
+#define HIF_TBL_TBL_UL_REQ_REG_TBL_UL_HOST_ID_SHIFT 11
+#define HIF_TBL_TBL_UL_REQ_REG_ADDR 0xa106007c
+#define HIF_TBL_TBL_UL_ADDR_L_REG_TBL_UL_ADDR_L_MASK 0xffffffff
+#define HIF_TBL_TBL_UL_ADDR_L_REG_ADDR 0xa1060080
+#define HIF_TBL_TBL_UL_ADDR_H_REG_TBL_UL_ADDR_H_MASK 0xffffffff
+#define HIF_TBL_TBL_UL_ADDR_H_REG_ADDR 0xa1060084
+#define HIF_TBL_TBL_UL_START_REG_ADDR 0xa1060088
+#define HIF_TBL_MSG_RDY_REG_ADDR 0xa1060044
+
+//hif_cmdqm_csr_defines.h
+#define HIF_CMDQM_HOST_REQ_PID_MEM_ADDR 0xa1026000
+#define HIF_CMDQM_HOST_REQ_CID_MEM_ADDR 0xa1028000
+#define HIF_CMDQM_HOST_RSP_PID_MEM_ADDR 0xa102e000
+#define HIF_CMDQM_HOST_RSP_CID_MEM_ADDR 0xa1030000
+#define HIF_CMDQM_HOST_REQ_BUF_BASE_H_ADDR_MEM_ADDR 0xa1022000
+#define HIF_CMDQM_HOST_REQ_BUF_BASE_L_ADDR_MEM_ADDR 0xa1024000
+#define HIF_CMDQM_HOST_RSP_BUF_BASE_H_ADDR_MEM_ADDR 0xa102a000
+#define HIF_CMDQM_HOST_RSP_BUF_BASE_L_ADDR_MEM_ADDR 0xa102c000
+#define HIF_CMDQM_VECTOR_ID_MEM_ADDR 0xa1034000
+#define HIF_CMDQM_Q_ELEMENT_SZ_REG_ADDR 0xa1020020
+#define HIF_CMDQM_HOST_Q_DEPTH_REG_ADDR 0xa1020028
+#define HIF_CMDQM_HOST_VF_ERR_STS_MEM_ADDR 0xa1032000
+
+//PSV use
+//hif_irq_csr_defines.h
+#define HIF_IRQ_CONTROL_TBL_MEM_ADDR 0xa1102000
+#define HIF_IRQ_INT_DB_REG_ADDR 0xa11000b4
+#define HIF_IRQ_CFG_VECTOR_TABLE_BUSY_REG_ADDR 0xa1100114
+#define HIF_IRQ_CFG_VECTOR_TABLE_ADDR_REG_ADDR 0xa11000f0
+#define HIF_IRQ_CFG_VECTOR_TABLE_CMD_REG_ADDR 0xa11000ec
+#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_LADDR_REG_ADDR 0xa11000f4
+#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_UADDR_REG_ADDR 0xa11000f8
+#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_DATA_REG_ADDR 0xa11000fc
+#define HIF_IRQ_CFG_VECTOR_TABLE_CTRL_REG_ADDR 0xa1100100
+#define HIF_IRQ_CFG_VECTOR_TABLE_START_REG_ADDR 0xa11000e8
+
+#endif /* XSC_HW_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h
new file mode 100644
index 000000000..f63a5f6c0
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h
@@ -0,0 +1,2513 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_CMD_H
+#define XSC_CMD_H
+
+#define CMDQ_VERSION 0x32
+
+#define MAX_MBOX_OUT_LEN 2048
+
+#define QOS_PRIO_MAX 7
+#define QOS_DSCP_MAX 63
+#define MAC_PORT_DSCP_SHIFT 6
+#define QOS_PCP_MAX 7
+#define DSCP_PCP_UNSET 255
+#define MAC_PORT_PCP_SHIFT 3
+#define XSC_MAX_MAC_NUM 8
+#define XSC_BOARD_SN_LEN 32
+#define MAX_PKT_LEN 9800
+#define XSC_RTT_CFG_QPN_MAX 32
+
+#define XSC_PCIE_LAT_CFG_INTERVAL_MAX 8
+#define XSC_PCIE_LAT_CFG_HISTOGRAM_MAX 9
+#define XSC_PCIE_LAT_EN_DISABLE 0
+#define XSC_PCIE_LAT_EN_ENABLE 1
+#define XSC_PCIE_LAT_PERIOD_MIN 1
+#define XSC_PCIE_LAT_PERIOD_MAX 20
+#define DPU_PORT_WGHT_CFG_MAX 1
+
+enum {
+ XSC_CMD_STAT_OK = 0x0,
+ XSC_CMD_STAT_INT_ERR = 0x1,
+ XSC_CMD_STAT_BAD_OP_ERR = 0x2,
+ XSC_CMD_STAT_BAD_PARAM_ERR = 0x3,
+ XSC_CMD_STAT_BAD_SYS_STATE_ERR = 0x4,
+ XSC_CMD_STAT_BAD_RES_ERR = 0x5,
+ XSC_CMD_STAT_RES_BUSY = 0x6,
+ XSC_CMD_STAT_LIM_ERR = 0x8,
+ XSC_CMD_STAT_BAD_RES_STATE_ERR = 0x9,
+ XSC_CMD_STAT_IX_ERR = 0xa,
+ XSC_CMD_STAT_NO_RES_ERR = 0xf,
+ XSC_CMD_STAT_BAD_INP_LEN_ERR = 0x50,
+ XSC_CMD_STAT_BAD_OUTP_LEN_ERR = 0x51,
+ XSC_CMD_STAT_BAD_QP_STATE_ERR = 0x10,
+ XSC_CMD_STAT_BAD_PKT_ERR = 0x30,
+ XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR = 0x40,
+};
+
+enum {
+ DPU_PORT_WGHT_TARGET_HOST,
+ DPU_PORT_WGHT_TARGET_SOC,
+ DPU_PORT_WGHT_TARGET_NUM,
+};
+
+enum {
+ DPU_PRIO_WGHT_TARGET_HOST2SOC,
+ DPU_PRIO_WGHT_TARGET_SOC2HOST,
+ DPU_PRIO_WGHT_TARGET_HOSTSOC2LAG,
+ DPU_PRIO_WGHT_TARGET_NUM,
+};
+
+#define XSC_AP_FEAT_UDP_SPORT_MIN 1024
+#define XSC_AP_FEAT_UDP_SPORT_MAX 65535
+
+enum {
+ XSC_CMD_OP_QUERY_HCA_CAP = 0x100,
+ XSC_CMD_OP_QUERY_ADAPTER = 0x101,
+ XSC_CMD_OP_INIT_HCA = 0x102,
+ XSC_CMD_OP_TEARDOWN_HCA = 0x103,
+ XSC_CMD_OP_ENABLE_HCA = 0x104,
+ XSC_CMD_OP_DISABLE_HCA = 0x105,
+ XSC_CMD_OP_MODIFY_HCA = 0x106,
+ XSC_CMD_OP_QUERY_PAGES = 0x107,
+ XSC_CMD_OP_MANAGE_PAGES = 0x108,
+ XSC_CMD_OP_SET_HCA_CAP = 0x109,
+ XSC_CMD_OP_QUERY_CMDQ_VERSION = 0x10a,
+ XSC_CMD_OP_QUERY_MSIX_TBL_INFO = 0x10b,
+ XSC_CMD_OP_FUNCTION_RESET = 0x10c,
+ XSC_CMD_OP_DUMMY = 0x10d,
+ XSC_CMD_OP_SET_DEBUG_INFO = 0x10e,
+ XSC_CMD_OP_QUERY_PSV_FUNCID = 0x10f,
+ XSC_CMD_OP_ALLOC_IA_LOCK = 0x110,
+ XSC_CMD_OP_RELEASE_IA_LOCK = 0x111,
+ XSC_CMD_OP_ENABLE_RELAXED_ORDER = 0x112,
+ XSC_CMD_OP_QUERY_GUID = 0x113,
+ XSC_CMD_OP_ACTIVATE_HW_CONFIG = 0x114,
+
+ XSC_CMD_OP_CREATE_MKEY = 0x200,
+ XSC_CMD_OP_QUERY_MKEY = 0x201,
+ XSC_CMD_OP_DESTROY_MKEY = 0x202,
+ XSC_CMD_OP_QUERY_SPECIAL_CONTEXTS = 0x203,
+ XSC_CMD_OP_REG_MR = 0x204,
+ XSC_CMD_OP_DEREG_MR = 0x205,
+ XSC_CMD_OP_SET_MPT = 0x206,
+ XSC_CMD_OP_SET_MTT = 0x207,
+
+ XSC_CMD_OP_CREATE_EQ = 0x301,
+ XSC_CMD_OP_DESTROY_EQ = 0x302,
+ XSC_CMD_OP_QUERY_EQ = 0x303,
+
+ XSC_CMD_OP_CREATE_CQ = 0x400,
+ XSC_CMD_OP_DESTROY_CQ = 0x401,
+ XSC_CMD_OP_QUERY_CQ = 0x402,
+ XSC_CMD_OP_MODIFY_CQ = 0x403,
+ XSC_CMD_OP_ALLOC_MULTI_VIRTQ_CQ = 0x404,
+ XSC_CMD_OP_RELEASE_MULTI_VIRTQ_CQ = 0x405,
+
+ XSC_CMD_OP_CREATE_QP = 0x500,
+ XSC_CMD_OP_DESTROY_QP = 0x501,
+ XSC_CMD_OP_RST2INIT_QP = 0x502,
+ XSC_CMD_OP_INIT2RTR_QP = 0x503,
+ XSC_CMD_OP_RTR2RTS_QP = 0x504,
+ XSC_CMD_OP_RTS2RTS_QP = 0x505,
+ XSC_CMD_OP_SQERR2RTS_QP = 0x506,
+ XSC_CMD_OP_2ERR_QP = 0x507,
+ XSC_CMD_OP_RTS2SQD_QP = 0x508,
+ XSC_CMD_OP_SQD2RTS_QP = 0x509,
+ XSC_CMD_OP_2RST_QP = 0x50a,
+ XSC_CMD_OP_QUERY_QP = 0x50b,
+ XSC_CMD_OP_CONF_SQP = 0x50c,
+ XSC_CMD_OP_MAD_IFC = 0x50d,
+ XSC_CMD_OP_INIT2INIT_QP = 0x50e,
+ XSC_CMD_OP_SUSPEND_QP = 0x50f,
+ XSC_CMD_OP_UNSUSPEND_QP = 0x510,
+ XSC_CMD_OP_SQD2SQD_QP = 0x511,
+ XSC_CMD_OP_ALLOC_QP_COUNTER_SET = 0x512,
+ XSC_CMD_OP_DEALLOC_QP_COUNTER_SET = 0x513,
+ XSC_CMD_OP_QUERY_QP_COUNTER_SET = 0x514,
+ XSC_CMD_OP_CREATE_MULTI_QP = 0x515,
+ XSC_CMD_OP_ALLOC_MULTI_VIRTQ = 0x516,
+ XSC_CMD_OP_RELEASE_MULTI_VIRTQ = 0x517,
+ XSC_CMD_OP_QUERY_QP_FLUSH_STATUS = 0x518,
+
+ XSC_CMD_OP_CREATE_PSV = 0x600,
+ XSC_CMD_OP_DESTROY_PSV = 0x601,
+ XSC_CMD_OP_QUERY_PSV = 0x602,
+ XSC_CMD_OP_QUERY_SIG_RULE_TABLE = 0x603,
+ XSC_CMD_OP_QUERY_BLOCK_SIZE_TABLE = 0x604,
+
+ XSC_CMD_OP_CREATE_SRQ = 0x700,
+ XSC_CMD_OP_DESTROY_SRQ = 0x701,
+ XSC_CMD_OP_QUERY_SRQ = 0x702,
+ XSC_CMD_OP_ARM_RQ = 0x703,
+ XSC_CMD_OP_RESIZE_SRQ = 0x704,
+
+ XSC_CMD_OP_ALLOC_PD = 0x800,
+ XSC_CMD_OP_DEALLOC_PD = 0x801,
+ XSC_CMD_OP_ALLOC_UAR = 0x802,
+ XSC_CMD_OP_DEALLOC_UAR = 0x803,
+
+ XSC_CMD_OP_ATTACH_TO_MCG = 0x806,
+ XSC_CMD_OP_DETACH_FROM_MCG = 0x807,
+
+ XSC_CMD_OP_ALLOC_XRCD = 0x80e,
+ XSC_CMD_OP_DEALLOC_XRCD = 0x80f,
+
+ XSC_CMD_OP_ACCESS_REG = 0x805,
+
+ XSC_CMD_OP_MODIFY_RAW_QP = 0x81f,
+
+ XSC_CMD_OP_ENABLE_NIC_HCA = 0x810,
+ XSC_CMD_OP_DISABLE_NIC_HCA = 0x811,
+ XSC_CMD_OP_MODIFY_NIC_HCA = 0x812,
+
+ XSC_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x820,
+ XSC_CMD_OP_MODIFY_NIC_VPORT_CONTEXT = 0x821,
+ XSC_CMD_OP_QUERY_VPORT_STATE = 0x822,
+ XSC_CMD_OP_MODIFY_VPORT_STATE = 0x823,
+ XSC_CMD_OP_QUERY_HCA_VPORT_CONTEXT = 0x824,
+ XSC_CMD_OP_MODIFY_HCA_VPORT_CONTEXT = 0x825,
+ XSC_CMD_OP_QUERY_HCA_VPORT_GID = 0x826,
+ XSC_CMD_OP_QUERY_HCA_VPORT_PKEY = 0x827,
+ XSC_CMD_OP_QUERY_VPORT_COUNTER = 0x828,
+ XSC_CMD_OP_QUERY_PRIO_STATS = 0x829,
+ XSC_CMD_OP_QUERY_PHYPORT_STATE = 0x830,
+ XSC_CMD_OP_QUERY_EVENT_TYPE = 0x831,
+ XSC_CMD_OP_QUERY_LINK_INFO = 0x832,
+ XSC_CMD_OP_QUERY_PFC_PRIO_STATS = 0x833,
+ XSC_CMD_OP_MODIFY_LINK_INFO = 0x834,
+ XSC_CMD_OP_QUERY_FEC_PARAM = 0x835,
+ XSC_CMD_OP_MODIFY_FEC_PARAM = 0x836,
+
+ XSC_CMD_OP_LAG_CREATE = 0x840,
+ XSC_CMD_OP_LAG_ADD_MEMBER = 0x841,
+ XSC_CMD_OP_LAG_REMOVE_MEMBER = 0x842,
+ XSC_CMD_OP_LAG_UPDATE_MEMBER_STATUS = 0x843,
+ XSC_CMD_OP_LAG_UPDATE_HASH_TYPE = 0x844,
+ XSC_CMD_OP_LAG_DESTROY = 0x845,
+
+ XSC_CMD_OP_LAG_SET_QOS = 0x848,
+ XSC_CMD_OP_ENABLE_MSIX = 0x850,
+
+ XSC_CMD_OP_IOCTL_FLOW = 0x900,
+ XSC_CMD_OP_IOCTL_OTHER = 0x901,
+
+ XSC_CMD_OP_IOCTL_SET_DSCP_PMT = 0x1000,
+ XSC_CMD_OP_IOCTL_GET_DSCP_PMT = 0x1001,
+ XSC_CMD_OP_IOCTL_SET_TRUST_MODE = 0x1002,
+ XSC_CMD_OP_IOCTL_GET_TRUST_MODE = 0x1003,
+ XSC_CMD_OP_IOCTL_SET_PCP_PMT = 0x1004,
+ XSC_CMD_OP_IOCTL_GET_PCP_PMT = 0x1005,
+ XSC_CMD_OP_IOCTL_SET_DEFAULT_PRI = 0x1006,
+ XSC_CMD_OP_IOCTL_GET_DEFAULT_PRI = 0x1007,
+ XSC_CMD_OP_IOCTL_SET_PFC = 0x1008,
+ XSC_CMD_OP_IOCTL_GET_PFC = 0x1009,
+ XSC_CMD_OP_IOCTL_SET_RATE_LIMIT = 0x100a,
+ XSC_CMD_OP_IOCTL_GET_RATE_LIMIT = 0x100b,
+ XSC_CMD_OP_IOCTL_SET_SP = 0x100c,
+ XSC_CMD_OP_IOCTL_GET_SP = 0x100d,
+ XSC_CMD_OP_IOCTL_SET_WEIGHT = 0x100e,
+ XSC_CMD_OP_IOCTL_GET_WEIGHT = 0x100f,
+ XSC_CMD_OP_IOCTL_DPU_SET_PORT_WEIGHT = 0x1010,
+ XSC_CMD_OP_IOCTL_DPU_GET_PORT_WEIGHT = 0x1011,
+ XSC_CMD_OP_IOCTL_DPU_SET_PRIO_WEIGHT = 0x1012,
+ XSC_CMD_OP_IOCTL_DPU_GET_PRIO_WEIGHT = 0x1013,
+ XSC_CMD_OP_IOCTL_SET_WATCHDOG_EN = 0x1014,
+ XSC_CMD_OP_IOCTL_GET_WATCHDOG_EN = 0x1015,
+ XSC_CMD_OP_IOCTL_SET_WATCHDOG_PERIOD = 0x1016,
+ XSC_CMD_OP_IOCTL_GET_WATCHDOG_PERIOD = 0x1017,
+ XSC_CMD_OP_IOCTL_SET_PFC_DROP_TH = 0x1018,
+ XSC_CMD_OP_IOCTL_GET_PFC_CFG_STATUS = 0x1019,
+
+ XSC_CMD_OP_IOCTL_SET_ENABLE_RP = 0x1030,
+ XSC_CMD_OP_IOCTL_SET_ENABLE_NP = 0x1031,
+ XSC_CMD_OP_IOCTL_SET_INIT_ALPHA = 0x1032,
+ XSC_CMD_OP_IOCTL_SET_G = 0x1033,
+ XSC_CMD_OP_IOCTL_SET_AI = 0x1034,
+ XSC_CMD_OP_IOCTL_SET_HAI = 0x1035,
+ XSC_CMD_OP_IOCTL_SET_TH = 0x1036,
+ XSC_CMD_OP_IOCTL_SET_BC_TH = 0x1037,
+ XSC_CMD_OP_IOCTL_SET_CNP_OPCODE = 0x1038,
+ XSC_CMD_OP_IOCTL_SET_CNP_BTH_B = 0x1039,
+ XSC_CMD_OP_IOCTL_SET_CNP_BTH_F = 0x103a,
+ XSC_CMD_OP_IOCTL_SET_CNP_ECN = 0x103b,
+ XSC_CMD_OP_IOCTL_SET_DATA_ECN = 0x103c,
+ XSC_CMD_OP_IOCTL_SET_CNP_TX_INTERVAL = 0x103d,
+ XSC_CMD_OP_IOCTL_SET_EVT_PERIOD_RSTTIME = 0x103e,
+ XSC_CMD_OP_IOCTL_SET_CNP_DSCP = 0x103f,
+ XSC_CMD_OP_IOCTL_SET_CNP_PCP = 0x1040,
+ XSC_CMD_OP_IOCTL_SET_EVT_PERIOD_ALPHA = 0x1041,
+ XSC_CMD_OP_IOCTL_GET_CC_CFG = 0x1042,
+ XSC_CMD_OP_IOCTL_GET_CC_STAT = 0x104b,
+ XSC_CMD_OP_IOCTL_SET_CLAMP_TGT_RATE = 0x1052,
+ XSC_CMD_OP_IOCTL_SET_MAX_HAI_FACTOR = 0x1053,
+ XSC_CMD_OP_IOCTL_SET_SCALE = 0x1054,
+
+ XSC_CMD_OP_IOCTL_SET_HWC = 0x1060,
+ XSC_CMD_OP_IOCTL_GET_HWC = 0x1061,
+
+ XSC_CMD_OP_SET_MTU = 0x1100,
+ XSC_CMD_OP_QUERY_ETH_MAC = 0X1101,
+
+ XSC_CMD_OP_QUERY_HW_STATS = 0X1200,
+ XSC_CMD_OP_QUERY_PAUSE_CNT = 0X1201,
+ XSC_CMD_OP_IOCTL_QUERY_PFC_STALL_STATS = 0x1202,
+ XSC_CMD_OP_QUERY_HW_STATS_RDMA = 0X1203,
+ XSC_CMD_OP_QUERY_HW_STATS_ETH = 0X1204,
+ XSC_CMD_OP_QUERY_HW_GLOBAL_STATS = 0X1210,
+
+ XSC_CMD_OP_SET_RTT_EN = 0X1220,
+ XSC_CMD_OP_GET_RTT_EN = 0X1221,
+ XSC_CMD_OP_SET_RTT_QPN = 0X1222,
+ XSC_CMD_OP_GET_RTT_QPN = 0X1223,
+ XSC_CMD_OP_SET_RTT_PERIOD = 0X1224,
+ XSC_CMD_OP_GET_RTT_PERIOD = 0X1225,
+ XSC_CMD_OP_GET_RTT_RESULT = 0X1226,
+ XSC_CMD_OP_GET_RTT_STATS = 0X1227,
+
+ XSC_CMD_OP_SET_LED_STATUS = 0X1228,
+
+ XSC_CMD_OP_AP_FEAT = 0x1400,
+ XSC_CMD_OP_PCIE_LAT_FEAT = 0x1401,
+
+ XSC_CMD_OP_GET_LLDP_STATUS = 0x1500,
+ XSC_CMD_OP_SET_LLDP_STATUS = 0x1501,
+
+ XSC_CMD_OP_SET_VPORT_RATE_LIMIT = 0x1600,
+
+ XSC_CMD_OP_SET_PORT_ADMIN_STATUS = 0x1801,
+ XSC_CMD_OP_USER_EMU_CMD = 0x8000,
+
+ XSC_CMD_OP_MAX
+};
+
+enum {
+ XSC_CMD_EVENT_RESP_CHANGE_LINK = 0x0001,
+ XSC_CMD_EVENT_RESP_TEMP_WARN = 0x0002,
+ XSC_CMD_EVENT_RESP_OVER_TEMP_PROTECTION = 0x0004,
+};
+
+enum xsc_eth_qp_num_sel {
+ XSC_ETH_QP_NUM_8K_SEL = 0,
+ XSC_ETH_QP_NUM_8K_8TC_SEL,
+ XSC_ETH_QP_NUM_SEL_MAX,
+};
+
+enum xsc_eth_vf_num_sel {
+ XSC_ETH_VF_NUM_SEL_8 = 0,
+ XSC_ETH_VF_NUM_SEL_16,
+ XSC_ETH_VF_NUM_SEL_32,
+ XSC_ETH_VF_NUM_SEL_64,
+ XSC_ETH_VF_NUM_SEL_128,
+ XSC_ETH_VF_NUM_SEL_256,
+ XSC_ETH_VF_NUM_SEL_512,
+ XSC_ETH_VF_NUM_SEL_1024,
+ XSC_ETH_VF_NUM_SEL_MAX
+};
+
+enum {
+ LINKSPEED_MODE_UNKNOWN = -1,
+ LINKSPEED_MODE_10G = 10000,
+ LINKSPEED_MODE_25G = 25000,
+ LINKSPEED_MODE_40G = 40000,
+ LINKSPEED_MODE_50G = 50000,
+ LINKSPEED_MODE_100G = 100000,
+ LINKSPEED_MODE_200G = 200000,
+ LINKSPEED_MODE_400G = 400000,
+};
+
+enum {
+ MODULE_SPEED_UNKNOWN,
+ MODULE_SPEED_10G,
+ MODULE_SPEED_25G,
+ MODULE_SPEED_40G_R4,
+ MODULE_SPEED_50G_R,
+ MODULE_SPEED_50G_R2,
+ MODULE_SPEED_100G_R2,
+ MODULE_SPEED_100G_R4,
+ MODULE_SPEED_200G_R4,
+ MODULE_SPEED_200G_R8,
+ MODULE_SPEED_400G_R8,
+};
+
+enum xsc_dma_direct {
+ DMA_DIR_TO_MAC,
+ DMA_DIR_READ,
+ DMA_DIR_WRITE,
+ DMA_DIR_LOOPBACK,
+ DMA_DIR_MAX,
+};
+
+/* hw feature bitmap, 32bit */
+enum xsc_hw_feature_flag {
+ XSC_HW_RDMA_SUPPORT = 0x1,
+ XSC_HW_PFC_PRIO_STATISTIC_SUPPORT = 0x2,
+ XSC_HW_THIRD_FEATURE = 0x4,
+ XSC_HW_PFC_STALL_STATS_SUPPORT = 0x8,
+ XSC_HW_RDMA_CM_SUPPORT = 0x20,
+
+ XSC_HW_LAST_FEATURE = 0x80000000,
+};
+
+enum xsc_lldp_dcbx_sub_cmd {
+ XSC_OS_HANDLE_LLDP_STATUS = 0x1,
+ XSC_DCBX_STATUS
+};
+
+struct xsc_inbox_hdr {
+ __be16 opcode;
+ u8 rsvd[4];
+ __be16 ver;
+};
+
+struct xsc_outbox_hdr {
+ u8 status;
+ u8 rsvd[5];
+ __be16 ver;
+};
+
+struct xsc_alloc_ia_lock_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 lock_num;
+ u8 rsvd[7];
+};
+
+#define XSC_RES_NUM_IAE_GRP 16
+
+struct xsc_alloc_ia_lock_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 lock_idx[XSC_RES_NUM_IAE_GRP];
+};
+
+struct xsc_release_ia_lock_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 lock_idx[XSC_RES_NUM_IAE_GRP];
+};
+
+struct xsc_release_ia_lock_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_pci_driver_init_params_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 s_wqe_mode;
+ __be32 r_wqe_mode;
+ __be32 local_timeout_retrans;
+ u8 mac_lossless_prio[XSC_MAX_MAC_NUM];
+ __be32 group_mod;
+};
+
+struct xsc_pci_driver_init_params_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+/*CQ mbox*/
+struct xsc_cq_context {
+ __be16 eqn;
+ __be16 pa_num;
+ __be16 glb_func_id;
+ u8 log_cq_sz;
+ u8 cq_type;
+};
+
+struct xsc_create_cq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_cq_context ctx;
+ __be64 pas[];
+};
+
+struct xsc_create_cq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 cqn;
+ u8 rsvd[4];
+};
+
+struct xsc_destroy_cq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 cqn;
+ u8 rsvd[4];
+};
+
+struct xsc_destroy_cq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+/*QP mbox*/
+struct xsc_create_qp_request {
+ __be16 input_qpn;
+ __be16 pa_num;
+ u8 qp_type;
+ u8 log_sq_sz;
+ u8 log_rq_sz;
+ u8 dma_direct;//0 for dma read, 1 for dma write
+ __be32 pdn;
+ __be16 cqn_send;
+ __be16 cqn_recv;
+ __be16 glb_funcid;
+ /*rsvd,rename logic_port used to transfer logical_port to fw*/
+ u8 rsvd[2];
+ __be64 pas[];
+};
+
+struct xsc_create_qp_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_create_qp_request req;
+};
+
+struct xsc_create_qp_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 qpn;
+ u8 rsvd[4];
+};
+
+struct xsc_destroy_qp_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 qpn;
+ u8 rsvd[4];
+};
+
+struct xsc_destroy_qp_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_query_qp_flush_status_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 qpn;
+};
+
+struct xsc_query_qp_flush_status_mbox_out {
+ struct xsc_outbox_hdr hdr;
+};
+
+struct xsc_qp_context {
+ __be32 remote_qpn;
+ __be32 cqn_send;
+ __be32 cqn_recv;
+ __be32 next_send_psn;
+ __be32 next_recv_psn;
+ __be32 pdn;
+ __be16 src_udp_port;
+ __be16 path_id;
+ u8 mtu_mode;
+ u8 lag_sel;
+ u8 lag_sel_en;
+ u8 retry_cnt;
+ u8 rnr_retry;
+ u8 dscp;
+ u8 state;
+ u8 hop_limit;
+ u8 dmac[6];
+ u8 smac[6];
+ __be32 dip[4];
+ __be32 sip[4];
+ __be16 ip_type;
+ __be16 grp_id;
+ u8 vlan_valid;
+ u8 dci_cfi_prio_sl;
+ __be16 vlan_id;
+ u8 qp_out_port;
+ u8 pcie_no;
+ __be16 lag_id;
+ __be16 func_id;
+ __be16 rsvd;
+};
+
+struct xsc_query_qp_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 qpn;
+ u8 rsvd[4];
+};
+
+struct xsc_query_qp_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_qp_context ctx;
+};
+
+struct xsc_modify_qp_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 qpn;
+ struct xsc_qp_context ctx;
+ u8 no_need_wait;
+};
+
+struct xsc_modify_qp_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_create_multiqp_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 qp_num;
+ u8 qp_type;
+ u8 rsvd;
+ __be32 req_len;
+ u8 data[];
+};
+
+struct xsc_create_multiqp_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 qpn_base;
+};
+
+struct xsc_alloc_multi_virtq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 qp_or_cq_num;
+ __be16 pa_num;
+ __be32 rsvd;
+ __be32 rsvd2;
+};
+
+struct xsc_alloc_multi_virtq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 qnum_base;
+ __be32 pa_list_base;
+ __be32 rsvd;
+};
+
+struct xsc_release_multi_virtq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 qp_or_cq_num;
+ __be16 pa_num;
+ __be32 qnum_base;
+ __be32 pa_list_base;
+};
+
+struct xsc_release_multi_virtq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 rsvd;
+ __be32 rsvd2;
+ __be32 rsvd3;
+};
+
+/* MSIX TABLE mbox */
+struct xsc_msix_table_info_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 index;
+ u8 rsvd[6];
+};
+
+struct xsc_msix_table_info_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 addr_lo;
+ __be32 addr_hi;
+ __be32 data;
+};
+
+/*EQ mbox*/
+struct xsc_eq_context {
+ __be16 vecidx;
+ __be16 pa_num;
+ u8 log_eq_sz;
+ __be16 glb_func_id;
+ u8 is_async_eq;
+ u8 rsvd;
+};
+
+struct xsc_create_eq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_eq_context ctx;
+ __be64 pas[];
+};
+
+struct xsc_create_eq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 eqn;
+ u8 rsvd[4];
+};
+
+struct xsc_destroy_eq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 eqn;
+ u8 rsvd[4];
+
+};
+
+struct xsc_destroy_eq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+/*PD mbox*/
+struct xsc_alloc_pd_request {
+ u8 rsvd[8];
+};
+
+struct xsc_alloc_pd_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_alloc_pd_request req;
+};
+
+struct xsc_alloc_pd_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 pdn;
+ u8 rsvd[4];
+};
+
+struct xsc_dealloc_pd_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 pdn;
+ u8 rsvd[4];
+
+};
+
+struct xsc_dealloc_pd_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+/*MR mbox*/
+struct xsc_register_mr_request {
+ __be32 pdn;
+ __be32 pa_num;
+ __be32 len;
+ __be32 mkey;
+ u8 rsvd;
+ u8 acc;
+ u8 page_mode;
+ u8 map_en;
+ __be64 va_base;
+ __be64 pas[];
+};
+
+struct xsc_register_mr_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_register_mr_request req;
+};
+
+struct xsc_register_mr_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 mkey;
+ u8 rsvd[4];
+};
+
+struct xsc_unregister_mr_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 mkey;
+ u8 rsvd[4];
+};
+
+struct xsc_unregister_mr_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_mpt_item {
+ __be32 pdn;
+ __be32 pa_num;
+ __be32 len;
+ __be32 mkey;
+ u8 rsvd[5];
+ u8 acc;
+ u8 page_mode;
+ u8 map_en;
+ __be64 va_base;
+};
+
+struct xsc_set_mpt_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_mpt_item mpt_item;
+};
+
+struct xsc_set_mpt_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 mtt_base;
+ u8 rsvd[4];
+};
+
+struct xsc_mtt_setting {
+ __be32 mtt_base;
+ __be32 pa_num;
+ __be64 pas[];
+};
+
+struct xsc_set_mtt_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_mtt_setting mtt_setting;
+};
+
+struct xsc_set_mtt_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_create_mkey_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[4];
+};
+
+struct xsc_create_mkey_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 mkey;
+};
+
+struct xsc_destroy_mkey_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 mkey;
+};
+
+struct xsc_destroy_mkey_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd;
+};
+
+struct xsc_access_reg_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd0[2];
+ __be16 register_id;
+ __be32 arg;
+ __be32 data[];
+};
+
+struct xsc_access_reg_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+ __be32 data[];
+};
+
+struct xsc_mad_ifc_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 remote_lid;
+ u8 rsvd0;
+ u8 port;
+ u8 rsvd1[4];
+ u8 data[256];
+};
+
+struct xsc_mad_ifc_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+ u8 data[256];
+};
+
+struct xsc_query_eq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd0[3];
+ u8 eqn;
+ u8 rsvd1[4];
+};
+
+struct xsc_query_eq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+ struct xsc_eq_context ctx;
+};
+
+struct xsc_query_cq_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 cqn;
+ u8 rsvd0[4];
+};
+
+struct xsc_query_cq_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[8];
+ struct xsc_cq_context ctx;
+ u8 rsvd6[16];
+ __be64 pas[];
+};
+
+struct xsc_cmd_query_cmdq_ver_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_query_cmdq_ver_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be16 cmdq_ver;
+ u8 rsvd[6];
+};
+
+struct xsc_cmd_dummy_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_dummy_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_fw_version {
+ u8 fw_version_major;
+ u8 fw_version_minor;
+ __be16 fw_version_patch;
+ __be32 fw_version_tweak;
+ u8 fw_version_extra_flag;
+ u8 rsvd[7];
+};
+
+struct xsc_hca_cap {
+ u8 rsvd1[12];
+ u8 send_seg_num;
+ u8 send_wqe_shift;
+ u8 recv_seg_num;
+ u8 recv_wqe_shift;
+ u8 log_max_srq_sz;
+ u8 log_max_qp_sz;
+ u8 log_max_mtt;
+ u8 log_max_qp;
+ u8 log_max_strq_sz;
+ u8 log_max_srqs;
+ u8 rsvd4[2];
+ u8 log_max_tso;
+ u8 log_max_cq_sz;
+ u8 rsvd6;
+ u8 log_max_cq;
+ u8 log_max_eq_sz;
+ u8 log_max_mkey;
+ u8 log_max_msix;
+ u8 log_max_eq;
+ u8 max_indirection;
+ u8 log_max_mrw_sz;
+ u8 log_max_bsf_list_sz;
+ u8 log_max_klm_list_sz;
+ u8 rsvd_8_0;
+ u8 log_max_ra_req_dc;
+ u8 rsvd_8_1;
+ u8 log_max_ra_res_dc;
+ u8 rsvd9;
+ u8 log_max_ra_req_qp;
+ u8 log_max_qp_depth;
+ u8 log_max_ra_res_qp;
+ __be16 max_vfs;
+ __be16 raweth_qp_id_end;
+ __be16 raw_tpe_qp_num;
+ __be16 max_qp_count;
+ __be16 raweth_qp_id_base;
+ u8 rsvd13;
+ u8 local_ca_ack_delay;
+ u8 max_num_eqs;
+ u8 num_ports;
+ u8 log_max_msg;
+ u8 mac_port;
+ __be16 raweth_rss_qp_id_base;
+ __be16 stat_rate_support;
+ u8 rsvd16[2];
+ __be64 flags;
+ u8 rsvd17;
+ u8 uar_sz;
+ u8 rsvd18;
+ u8 log_pg_sz;
+ __be16 bf_log_bf_reg_size;
+ __be16 msix_base;
+ __be16 msix_num;
+ __be16 max_desc_sz_sq;
+ u8 rsvd20[2];
+ __be16 max_desc_sz_rq;
+ u8 rsvd21[2];
+ __be16 max_desc_sz_sq_dc;
+ u8 rsvd22[4];
+ __be16 max_qp_mcg;
+ u8 rsvd23;
+ u8 log_max_mcg;
+ u8 rsvd24;
+ u8 log_max_pd;
+ u8 rsvd25;
+ u8 log_max_xrcd;
+ u8 rsvd26[40];
+ __be32 uar_page_sz;
+ u8 rsvd27[8];
+ __be32 hw_feature_flag;/*enum xsc_hw_feature_flag*/
+ __be16 pf0_vf_funcid_base;
+ __be16 pf0_vf_funcid_top;
+ __be16 pf1_vf_funcid_base;
+ __be16 pf1_vf_funcid_top;
+ __be16 pcie0_pf_funcid_base;
+ __be16 pcie0_pf_funcid_top;
+ __be16 pcie1_pf_funcid_base;
+ __be16 pcie1_pf_funcid_top;
+ u8 log_msx_atomic_size_qp;
+ u8 pcie_host;
+ u8 rsvd28;
+ u8 log_msx_atomic_size_dc;
+ u8 board_sn[XSC_BOARD_SN_LEN];
+ u8 max_tc;
+ u8 mac_bit;
+ __be16 funcid_to_logic_port;
+ u8 rsvd29[6];
+ u8 nif_port_num;
+ u8 reg_mr_via_cmdq;
+ __be32 hca_core_clock;
+ __be32 max_rwq_indirection_tables;/*rss_caps*/
+ __be32 max_rwq_indirection_table_size;/*rss_caps*/
+ __be32 chip_ver_h;
+ __be32 chip_ver_m;
+ __be32 chip_ver_l;
+ __be32 hotfix_num;
+ __be32 feature_flag;
+ __be32 rx_pkt_len_max;
+ __be32 glb_func_id;
+ __be64 tx_db;
+ __be64 rx_db;
+ __be64 complete_db;
+ __be64 complete_reg;
+ __be64 event_db;
+ __be32 qp_rate_limit_min;
+ __be32 qp_rate_limit_max;
+ struct xsc_fw_version fw_ver;
+ u8 lag_logic_port_ofst;
+};
+
+struct xsc_cmd_query_hca_cap_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 cpu_num;
+ u8 rsvd[6];
+};
+
+struct xsc_cmd_query_hca_cap_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[8];
+ struct xsc_hca_cap hca_cap;
+};
+
+struct xsc_cmd_enable_hca_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 vf_num;
+ __be16 max_msix_vec;
+ __be16 cpu_num;
+ u8 pp_bypass;
+ u8 esw_mode;
+};
+
+struct xsc_cmd_enable_hca_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[4];
+};
+
+struct xsc_cmd_disable_hca_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 vf_num;
+ u8 pp_bypass;
+ u8 esw_mode;
+};
+
+struct xsc_cmd_disable_hca_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[4];
+};
+
+struct xsc_cmd_modify_hca_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 pp_bypass;
+ u8 esw_mode;
+ u8 rsvd0[6];
+};
+
+struct xsc_cmd_modify_hca_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[4];
+};
+
+struct xsc_query_special_ctxs_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_query_special_ctxs_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 dump_fill_mkey;
+ __be32 reserved_lkey;
+};
+
+/* vport mbox */
+struct xsc_nic_vport_context {
+ __be32 min_wqe_inline_mode:3;
+ __be32 disable_mc_local_lb:1;
+ __be32 disable_uc_local_lb:1;
+ __be32 roce_en:1;
+
+ __be32 arm_change_event:1;
+ __be32 event_on_mtu:1;
+ __be32 event_on_promisc_change:1;
+ __be32 event_on_vlan_change:1;
+ __be32 event_on_mc_address_change:1;
+ __be32 event_on_uc_address_change:1;
+ __be32 affiliation_criteria:4;
+ __be32 affiliated_vhca_id;
+
+ __be16 mtu;
+
+ __be64 system_image_guid;
+ __be64 port_guid;
+ __be64 node_guid;
+
+ __be32 qkey_violation_counter;
+
+ __be16 spoofchk:1;
+ __be16 trust:1;
+ __be16 promisc:1;
+ __be16 allmcast:1;
+ __be16 vlan_allowed:1;
+ __be16 allowed_list_type:3;
+ __be16 allowed_list_size:10;
+
+ __be16 vlan_proto;
+ __be16 vlan;
+ u8 qos;
+ u8 permanent_address[6];
+ u8 current_address[6];
+ u8 current_uc_mac_address[0][2];
+};
+
+enum {
+ XSC_HCA_VPORT_SEL_PORT_GUID = 1 << 0,
+ XSC_HCA_VPORT_SEL_NODE_GUID = 1 << 1,
+ XSC_HCA_VPORT_SEL_STATE_POLICY = 1 << 2,
+};
+
+struct xsc_hca_vport_context {
+ u32 field_select;
+ u32 port_physical_state:4;
+ u32 vport_state_policy:4;
+ u32 port_state:4;
+ u32 vport_state:4;
+ u32 rcvd0:16;
+
+ u64 system_image_guid;
+ u64 port_guid;
+ u64 node_guid;
+
+ u16 qkey_violation_counter;
+ u16 pkey_violation_counter;
+};
+
+struct xsc_query_nic_vport_context_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_nic_vport_context nic_vport_ctx;
+};
+
+struct xsc_query_nic_vport_context_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 vport_number:16;
+ u32 allowed_list_type:3;
+ u32 rsvd:12;
+};
+
+struct xsc_modify_nic_vport_context_out {
+ struct xsc_outbox_hdr hdr;
+ __be16 outer_vlan_id;
+ u8 rsvd[2];
+};
+
+struct xsc_modify_nic_vport_field_select {
+ __be32 affiliation:1;
+ __be32 disable_uc_local_lb:1;
+ __be32 disable_mc_local_lb:1;
+ __be32 node_guid:1;
+ __be32 port_guid:1;
+ __be32 min_inline:1;
+ __be32 mtu:1;
+ __be32 change_event:1;
+ __be32 promisc:1;
+ __be32 allmcast:1;
+ __be32 permanent_address:1;
+ __be32 current_address:1;
+ __be32 addresses_list:1;
+ __be32 roce_en:1;
+ __be32 spoofchk:1;
+ __be32 trust:1;
+ __be32 rsvd:16;
+};
+
+struct xsc_modify_nic_vport_context_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 other_vport:1;
+ __be32 vport_number:16;
+ __be32 rsvd:15;
+ __be16 caps;
+ __be16 caps_mask;
+ __be16 lag_id;
+
+ struct xsc_modify_nic_vport_field_select field_select;
+ struct xsc_nic_vport_context nic_vport_ctx;
+};
+
+struct xsc_query_hca_vport_context_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_hca_vport_context hca_vport_ctx;
+};
+
+struct xsc_query_hca_vport_context_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 port_num:4;
+ u32 vport_number:16;
+ u32 rsvd0:11;
+};
+
+struct xsc_modify_hca_vport_context_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[2];
+};
+
+struct xsc_modify_hca_vport_context_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 port_num:4;
+ u32 vport_number:16;
+ u32 rsvd0:11;
+
+ struct xsc_hca_vport_context hca_vport_ctx;
+};
+
+struct xsc_array128 {
+ u8 array128[16];
+};
+
+struct xsc_query_hca_vport_gid_out {
+ struct xsc_outbox_hdr hdr;
+ u16 gids_num;
+ struct xsc_array128 gid[];
+};
+
+struct xsc_query_hca_vport_gid_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 port_num:4;
+ u32 vport_number:16;
+ u32 rsvd0:11;
+ u16 gid_index;
+};
+
+struct xsc_pkey {
+ u16 pkey;
+};
+
+struct xsc_query_hca_vport_pkey_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_pkey pkey[];
+};
+
+struct xsc_query_hca_vport_pkey_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 port_num:4;
+ u32 vport_number:16;
+ u32 rsvd0:11;
+ u16 pkey_index;
+};
+
+struct xsc_query_vport_state_out {
+ struct xsc_outbox_hdr hdr;
+ u8 admin_state:4;
+ u8 state:4;
+};
+
+struct xsc_query_vport_state_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 vport_number:16;
+ u32 rsvd0:15;
+};
+
+struct xsc_modify_vport_state_out {
+ struct xsc_outbox_hdr hdr;
+};
+
+struct xsc_modify_vport_state_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 vport_number:16;
+ u32 rsvd0:15;
+ u8 admin_state:4;
+ u8 rsvd1:4;
+};
+
+struct xsc_traffic_counter {
+ u64 packets;
+ u64 bytes;
+};
+
+struct xsc_query_vport_counter_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_traffic_counter received_errors;
+ struct xsc_traffic_counter transmit_errors;
+ struct xsc_traffic_counter received_ib_unicast;
+ struct xsc_traffic_counter transmitted_ib_unicast;
+ struct xsc_traffic_counter received_ib_multicast;
+ struct xsc_traffic_counter transmitted_ib_multicast;
+ struct xsc_traffic_counter received_eth_broadcast;
+ struct xsc_traffic_counter transmitted_eth_broadcast;
+ struct xsc_traffic_counter received_eth_unicast;
+ struct xsc_traffic_counter transmitted_eth_unicast;
+ struct xsc_traffic_counter received_eth_multicast;
+ struct xsc_traffic_counter transmitted_eth_multicast;
+};
+
+struct xsc_query_vport_counter_in {
+ struct xsc_inbox_hdr hdr;
+ u32 other_vport:1;
+ u32 port_num:4;
+ u32 vport_number:16;
+ u32 rsvd0:11;
+};
+
+/* ioctl mbox */
+struct xsc_ioctl_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 len;
+ __be16 rsvd;
+ u8 data[];
+};
+
+struct xsc_ioctl_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 error;
+ __be16 len;
+ __be16 rsvd;
+ u8 data[];
+};
+
+struct xsc_modify_raw_qp_request {
+ u16 qpn;
+ u16 lag_id;
+ u16 func_id;
+ u8 dma_direct;
+ u8 prio;
+ u8 qp_out_port;
+ u8 rsvd[7];
+};
+
+struct xsc_modify_raw_qp_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 pcie_no;
+ u8 rsv[7];
+ struct xsc_modify_raw_qp_request req;
+};
+
+struct xsc_modify_raw_qp_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+#define ETH_ALEN 6
+
+struct xsc_create_lag_request {
+ __be16 lag_id;
+ u8 lag_type;
+ u8 lag_sel_mode;
+ u8 mac_idx;
+ u8 netdev_addr[ETH_ALEN];
+ u8 bond_mode;
+ u8 slave_status;
+};
+
+struct xsc_add_lag_member_request {
+ __be16 lag_id;
+ u8 lag_type;
+ u8 lag_sel_mode;
+ u8 mac_idx;
+ u8 netdev_addr[ETH_ALEN];
+ u8 bond_mode;
+ u8 slave_status;
+ u8 mad_mac_idx;
+};
+
+struct xsc_remove_lag_member_request {
+ __be16 lag_id;
+ u8 lag_type;
+ u8 mac_idx;
+ u8 mad_mac_idx;
+ u8 bond_mode;
+ u8 is_roce_lag_xdev;
+ u8 not_roce_lag_xdev_mask;
+};
+
+struct xsc_update_lag_member_status_request {
+ __be16 lag_id;
+ u8 lag_type;
+ u8 mac_idx;
+ u8 bond_mode;
+ u8 slave_status;
+ u8 rsvd;
+};
+
+struct xsc_update_lag_hash_type_request {
+ __be16 lag_id;
+ u8 lag_sel_mode;
+ u8 rsvd[5];
+};
+
+struct xsc_destroy_lag_request {
+ __be16 lag_id;
+ u8 lag_type;
+ u8 mac_idx;
+ u8 bond_mode;
+ u8 slave_status;
+ u8 rsvd[3];
+};
+
+struct xsc_set_lag_qos_request {
+ __be16 lag_id;
+ u8 member_idx;
+ u8 lag_op;
+ u8 resv[4];
+};
+
+struct xsc_create_lag_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_create_lag_request req;
+};
+
+struct xsc_create_lag_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_add_lag_member_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_add_lag_member_request req;
+};
+
+struct xsc_add_lag_member_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_remove_lag_member_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_remove_lag_member_request req;
+};
+
+struct xsc_remove_lag_member_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_update_lag_member_status_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_update_lag_member_status_request req;
+};
+
+struct xsc_update_lag_member_status_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_update_lag_hash_type_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_update_lag_hash_type_request req;
+};
+
+struct xsc_update_lag_hash_type_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_destroy_lag_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_destroy_lag_request req;
+};
+
+struct xsc_destroy_lag_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_set_lag_qos_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_set_lag_qos_request req;
+};
+
+struct xsc_set_lag_qos_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+/*ioctl qos*/
+struct xsc_qos_req_prfx {
+ u8 mac_port;
+ u8 rsvd[7];
+};
+
+struct xsc_qos_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_qos_req_prfx req_prfx;
+ u8 data[];
+};
+
+struct xsc_qos_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 data[];
+};
+
+struct xsc_prio_stats {
+ u64 tx_bytes;
+ u64 rx_bytes;
+ u64 tx_pkts;
+ u64 rx_pkts;
+};
+
+struct xsc_prio_stats_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 pport;
+};
+
+struct xsc_prio_stats_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_prio_stats prio_stats[QOS_PRIO_MAX + 1];
+};
+
+struct xsc_pfc_prio_stats {
+ u64 tx_pause;
+ u64 tx_pause_duration;
+ u64 rx_pause;
+ u64 rx_pause_duration;
+};
+
+struct xsc_pfc_prio_stats_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 pport;
+};
+
+struct xsc_pfc_prio_stats_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_pfc_prio_stats prio_stats[QOS_PRIO_MAX + 1];
+};
+
+struct xsc_hw_stats_rdma_pf {
+ /*by mac port*/
+ u64 rdma_tx_pkts;
+ u64 rdma_tx_bytes;
+ u64 rdma_rx_pkts;
+ u64 rdma_rx_bytes;
+ u64 np_cnp_sent;
+ u64 rp_cnp_handled;
+ u64 np_ecn_marked_roce_packets;
+ u64 rp_cnp_ignored;
+ u64 read_rsp_out_of_seq;
+ u64 implied_nak_seq_err;
+ /*by function*/
+ u64 out_of_sequence;
+ u64 packet_seq_err;
+ u64 out_of_buffer;
+ u64 rnr_nak_retry_err;
+ u64 local_ack_timeout_err;
+ u64 rx_read_requests;
+ u64 rx_write_requests;
+ u64 duplicate_requests;
+ u64 rdma_tx_pkts_func;
+ u64 rdma_tx_payload_bytes;
+ u64 rdma_rx_pkts_func;
+ u64 rdma_rx_payload_bytes;
+ /*global*/
+ u64 rdma_loopback_pkts;
+ u64 rdma_loopback_bytes;
+};
+
+struct xsc_hw_stats_rdma_vf {
+ /*by function*/
+ u64 rdma_tx_pkts_func;
+ u64 rdma_tx_payload_bytes;
+ u64 rdma_rx_pkts_func;
+ u64 rdma_rx_payload_bytes;
+
+ u64 out_of_sequence;
+ u64 packet_seq_err;
+ u64 out_of_buffer;
+ u64 rnr_nak_retry_err;
+ u64 local_ack_timeout_err;
+ u64 rx_read_requests;
+ u64 rx_write_requests;
+ u64 duplicate_requests;
+};
+
+struct xsc_hw_stats_rdma {
+ u8 is_pf;
+ u8 rsv[3];
+ union {
+ struct xsc_hw_stats_rdma_pf pf_stats;
+ struct xsc_hw_stats_rdma_vf vf_stats;
+ } stats;
+};
+
+struct xsc_hw_stats_eth_pf {
+ /*by mac port*/
+ u64 rdma_tx_pkts;
+ u64 rdma_tx_bytes;
+ u64 rdma_rx_pkts;
+ u64 rdma_rx_bytes;
+ u64 tx_pause;
+ u64 rx_pause;
+ u64 rx_fcs_errors;
+ u64 rx_discards;
+ u64 tx_multicast_phy;
+ u64 tx_broadcast_phy;
+ u64 rx_multicast_phy;
+ u64 rx_broadcast_phy;
+ /*by global*/
+ u64 rdma_loopback_pkts;
+ u64 rdma_loopback_bytes;
+};
+
+struct xsc_hw_stats_eth_vf {
+ /*by function*/
+ u64 rdma_tx_pkts;
+ u64 rdma_tx_bytes;
+ u64 rdma_rx_pkts;
+ u64 rdma_rx_bytes;
+};
+
+struct xsc_hw_stats_eth {
+ u8 is_pf;
+ u8 rsv[3];
+ union {
+ struct xsc_hw_stats_eth_pf pf_stats;
+ struct xsc_hw_stats_eth_vf vf_stats;
+ } stats;
+};
+
+struct xsc_hw_stats_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 mac_port;
+ u8 is_lag;
+ u8 lag_member_num;
+ u8 member_port[];
+};
+
+struct xsc_hw_stats_rdma_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_hw_stats_rdma hw_stats;
+};
+
+struct xsc_hw_stats_eth_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_hw_stats_eth hw_stats;
+};
+
+struct xsc_hw_global_stats_rdma {
+ /*by global*/
+ u64 rdma_loopback_pkts;
+ u64 rdma_loopback_bytes;
+ u64 rx_icrc_encapsulated;
+ u64 req_cqe_error;
+ u64 resp_cqe_error;
+ u64 cqe_msg_code_error;
+};
+
+struct xsc_hw_global_stats_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsv[4];
+};
+
+struct xsc_hw_global_stats_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_hw_global_stats_rdma hw_stats;
+};
+
+struct xsc_pfc_stall_stats {
+ /*by mac port*/
+ u64 tx_pause_storm_triggered;
+};
+
+struct xsc_pfc_stall_stats_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 mac_port;
+};
+
+struct xsc_pfc_stall_stats_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_pfc_stall_stats pfc_stall_stats;
+};
+
+struct xsc_dscp_pmt_set {
+ u8 dscp;
+ u8 priority;
+ u8 rsvd[6];
+};
+
+struct xsc_dscp_pmt_get {
+ u8 prio_map[QOS_DSCP_MAX + 1];
+ u8 max_prio;
+ u8 rsvd[7];
+};
+
+struct xsc_trust_mode_set {
+ u8 is_pcp;
+ u8 rsvd[7];
+};
+
+struct xsc_trust_mode_get {
+ u8 is_pcp;
+ u8 rsvd[7];
+};
+
+struct xsc_pcp_pmt_set {
+ u8 pcp;
+ u8 priority;
+ u8 rsvd[6];
+};
+
+struct xsc_pcp_pmt_get {
+ u8 prio_map[QOS_PCP_MAX + 1];
+ u8 max_prio;
+ u8 rsvd[7];
+};
+
+struct xsc_default_pri_set {
+ u8 priority;
+ u8 rsvd[7];
+};
+
+struct xsc_default_pri_get {
+ u8 priority;
+ u8 rsvd[7];
+};
+
+#define PFC_WATCHDOG_EN_OFF 0
+#define PFC_WATCHDOG_EN_ON 1
+struct xsc_watchdog_en_set {
+ u8 en;
+};
+
+struct xsc_watchdog_en_get {
+ u8 en;
+};
+
+#define PFC_WATCHDOG_PERIOD_MIN 1
+#define PFC_WATCHDOG_PERIOD_MAX 4000000
+struct xsc_watchdog_period_set {
+ u32 period;
+};
+
+struct xsc_watchdog_period_get {
+ u32 period;
+};
+
+struct xsc_event_resp {
+ u8 resp_cmd_type; /* bitmap:0x0001: link up/down */
+};
+
+struct xsc_event_linkstatus_resp {
+ u8 linkstatus; /*0:down, 1:up*/
+};
+
+struct xsc_event_linkinfo {
+ u8 linkstatus; /*0:down, 1:up*/
+ u8 port;
+ u8 duplex;
+ u8 autoneg;
+ u32 linkspeed;
+ u64 supported;
+ u64 advertising;
+ u64 supported_fec; /* reserved, not support currently */
+ u64 advertised_fec; /* reserved, not support currently */
+ u64 supported_speed[2];
+ u64 advertising_speed[2];
+};
+
+struct xsc_lldp_status_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 os_handle_lldp;
+ u8 sub_type;
+};
+
+struct xsc_lldp_status_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ union {
+ __be32 os_handle_lldp;
+ __be32 dcbx_status;
+ } status;
+};
+
+struct xsc_vport_rate_limit_mobox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 other_vport;
+ __be16 vport_number;
+ __be16 rsvd0;
+ __be32 rate;
+};
+
+struct xsc_vport_rate_limit_mobox_out {
+ struct xsc_outbox_hdr hdr;
+};
+
+struct xsc_event_query_type_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[2];
+};
+
+struct xsc_event_query_type_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_event_resp ctx;
+};
+
+struct xsc_event_query_linkstatus_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[2];
+};
+
+struct xsc_event_query_linkstatus_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_event_linkstatus_resp ctx;
+};
+
+struct xsc_event_query_linkinfo_mbox_in {
+ struct xsc_inbox_hdr hdr;
+};
+
+struct xsc_event_query_linkinfo_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct xsc_event_linkinfo ctx;
+};
+
+struct xsc_event_modify_linkinfo_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_event_linkinfo ctx;
+};
+
+struct xsc_event_modify_linkinfo_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u32 status;
+};
+
+struct xsc_event_set_port_admin_status_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u16 admin_status;
+
+};
+
+struct xsc_event_set_port_admin_status_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u32 status;
+};
+
+struct xsc_event_set_led_status_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 port_id;
+};
+
+struct xsc_event_set_led_status_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u32 status;
+};
+
+struct xsc_event_modify_fecparam_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u32 fec;
+};
+
+struct xsc_event_modify_fecparam_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u32 status;
+};
+
+struct xsc_event_query_fecparam_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[2];
+};
+
+struct xsc_event_query_fecparam_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u32 active_fec;
+ u32 fec_cfg;
+ u32 status;
+};
+
+#define PFC_ON_PG_PRFL_IDX 0
+#define PFC_OFF_PG_PRFL_IDX 1
+#define PFC_ON_QMU_VALUE 0
+#define PFC_OFF_QMU_VALUE 1
+
+#define NIF_PFC_EN_ON 1
+#define NIF_PFC_EN_OFF 0
+
+#define PFC_CFG_CHECK_TIMEOUT_US 8000000
+#define PFC_CFG_CHECK_SLEEP_TIME_US 200
+#define PFC_CFG_CHECK_MAX_RETRY_TIMES \
+ (PFC_CFG_CHECK_TIMEOUT_US / PFC_CFG_CHECK_SLEEP_TIME_US)
+#define PFC_CFG_CHECK_VALID_CNT 3
+
+enum {
+ PFC_OP_ENABLE = 0,
+ PFC_OP_DISABLE,
+ PFC_OP_MODIFY,
+ PFC_OP_TYPE_MAX,
+};
+
+enum {
+ DROP_TH_CLEAR = 0,
+ DROP_TH_RECOVER,
+ DROP_TH_RECOVER_LOSSY,
+ DROP_TH_RECOVER_LOSSLESS,
+};
+
+struct xsc_pfc_cfg {
+ u8 req_prio;
+ u8 req_pfc_en;
+ u8 curr_prio;
+ u8 curr_pfc_en;
+ u8 pfc_op;
+ u8 lossless_num;
+};
+
+#define LOSSLESS_NUM_INVAILD 9
+struct xsc_pfc_set {
+ u8 priority;
+ u8 pfc_on;
+ u8 type;
+ u8 src_prio;
+ u8 lossless_num;
+};
+
+#define PFC_PRIO_MAX 7
+struct xsc_pfc_get {
+ u8 pfc_on[PFC_PRIO_MAX + 1];
+ u8 max_prio;
+};
+
+struct xsc_pfc_set_drop_th_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 prio;
+ u8 cfg_type;
+};
+
+struct xsc_pfc_set_drop_th_mbox_out {
+ struct xsc_outbox_hdr hdr;
+};
+
+struct xsc_pfc_get_cfg_status_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 prio;
+};
+
+struct xsc_pfc_get_cfg_status_mbox_out {
+ struct xsc_outbox_hdr hdr;
+};
+
+struct xsc_rate_limit_set {
+ u32 rate_cir;
+ u32 limit_id;
+ u8 limit_level;
+ u8 rsvd[7];
+};
+
+struct xsc_rate_limit_get {
+ u32 rate_cir[QOS_PRIO_MAX + 1];
+ u32 max_limit_id;
+ u8 limit_level;
+ u8 rsvd[3];
+};
+
+struct xsc_sp_set {
+ u8 sp[QOS_PRIO_MAX + 1];
+};
+
+struct xsc_sp_get {
+ u8 sp[QOS_PRIO_MAX + 1];
+ u8 max_prio;
+ u8 rsvd[7];
+};
+
+struct xsc_weight_set {
+ u8 weight[QOS_PRIO_MAX + 1];
+};
+
+struct xsc_weight_get {
+ u8 weight[QOS_PRIO_MAX + 1];
+ u8 max_prio;
+ u8 rsvd[7];
+};
+
+struct xsc_dpu_port_weight_set {
+ u8 target;
+ u8 weight[DPU_PORT_WGHT_CFG_MAX + 1];
+ u8 rsv[5];
+};
+
+struct xsc_dpu_port_weight_get {
+ u8 weight[DPU_PORT_WGHT_TARGET_NUM][DPU_PORT_WGHT_CFG_MAX + 1];
+ u8 rsvd[4];
+};
+
+struct xsc_dpu_prio_weight_set {
+ u8 target;
+ u8 weight[QOS_PRIO_MAX + 1];
+ u8 rsv[7];
+};
+
+struct xsc_dpu_prio_weight_get {
+ u8 weight[DPU_PRIO_WGHT_TARGET_NUM][QOS_PRIO_MAX + 1];
+};
+
+struct xsc_cc_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 data[];
+};
+
+struct xsc_cc_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 data[];
+};
+
+struct xsc_cc_ctrl_cmd {
+ u16 cmd;
+ u16 len;
+ u8 val[];
+};
+
+struct xsc_cc_cmd_enable_rp {
+ u16 cmd;
+ u16 len;
+ u32 enable;
+ u32 section;
+};
+
+struct xsc_cc_cmd_enable_np {
+ u16 cmd;
+ u16 len;
+ u32 enable;
+ u32 section;
+};
+
+struct xsc_cc_cmd_init_alpha {
+ u16 cmd;
+ u16 len;
+ u32 alpha;
+ u32 section;
+};
+
+struct xsc_cc_cmd_g {
+ u16 cmd;
+ u16 len;
+ u32 g;
+ u32 section;
+};
+
+struct xsc_cc_cmd_ai {
+ u16 cmd;
+ u16 len;
+ u32 ai;
+ u32 section;
+};
+
+struct xsc_cc_cmd_hai {
+ u16 cmd;
+ u16 len;
+ u32 hai;
+ u32 section;
+};
+
+struct xsc_cc_cmd_th {
+ u16 cmd;
+ u16 len;
+ u32 threshold;
+ u32 section;
+};
+
+struct xsc_cc_cmd_bc {
+ u16 cmd;
+ u16 len;
+ u32 bytecount;
+ u32 section;
+};
+
+struct xsc_cc_cmd_cnp_opcode {
+ u16 cmd;
+ u16 len;
+ u32 opcode;
+};
+
+struct xsc_cc_cmd_cnp_bth_b {
+ u16 cmd;
+ u16 len;
+ u32 bth_b;
+};
+
+struct xsc_cc_cmd_cnp_bth_f {
+ u16 cmd;
+ u16 len;
+ u32 bth_f;
+};
+
+struct xsc_cc_cmd_cnp_ecn {
+ u16 cmd;
+ u16 len;
+ u32 ecn;
+};
+
+struct xsc_cc_cmd_data_ecn {
+ u16 cmd;
+ u16 len;
+ u32 ecn;
+};
+
+struct xsc_cc_cmd_cnp_tx_interval {
+ u16 cmd;
+ u16 len;
+ u32 interval; // us
+ u32 section;
+};
+
+struct xsc_cc_cmd_evt_rsttime {
+ u16 cmd;
+ u16 len;
+ u32 period;
+};
+
+struct xsc_cc_cmd_cnp_dscp {
+ u16 cmd;
+ u16 len;
+ u32 dscp;
+ u32 section;
+};
+
+struct xsc_cc_cmd_cnp_pcp {
+ u16 cmd;
+ u16 len;
+ u32 pcp;
+ u32 section;
+};
+
+struct xsc_cc_cmd_evt_period_alpha {
+ u16 cmd;
+ u16 len;
+ u32 period;
+};
+
+struct xsc_cc_cmd_clamp_tgt_rate {
+ u16 cmd;
+ u16 len;
+ u32 clamp_tgt_rate;
+ u32 section;
+};
+
+struct xsc_cc_cmd_max_hai_factor {
+ u16 cmd;
+ u16 len;
+ u32 max_hai_factor;
+ u32 section;
+};
+
+struct xsc_cc_cmd_scale {
+ u16 cmd;
+ u16 len;
+ u32 scale;
+ u32 section;
+};
+
+struct xsc_cc_cmd_get_cfg {
+ u16 cmd;
+ u16 len;
+ u32 enable_rp;
+ u32 enable_np;
+ u32 init_alpha;
+ u32 g;
+ u32 ai;
+ u32 hai;
+ u32 threshold;
+ u32 bytecount;
+ u32 opcode;
+ u32 bth_b;
+ u32 bth_f;
+ u32 cnp_ecn;
+ u32 data_ecn;
+ u32 cnp_tx_interval;
+ u32 evt_period_rsttime;
+ u32 cnp_dscp;
+ u32 cnp_pcp;
+ u32 evt_period_alpha;
+ u32 clamp_tgt_rate;
+ u32 max_hai_factor;
+ u32 scale;
+ u32 section;
+};
+
+struct xsc_cc_cmd_get_stat {
+ u16 cmd;
+ u16 len;
+ u32 section;
+};
+
+struct xsc_cc_cmd_stat {
+ u32 cnp_handled;
+ u32 alpha_recovery;
+ u32 reset_timeout;
+ u32 reset_bytecount;
+};
+
+struct xsc_set_mtu_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 mtu;
+ __be16 rx_buf_sz_min;
+ u8 mac_port;
+ u8 rsvd;
+};
+
+struct xsc_hwc_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 data[];
+};
+
+struct xsc_hwc_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 data[];
+};
+
+struct hwc_set_t {
+ u8 type;
+ u8 s_wqe_mode;
+ u8 r_wqe_mode;
+ u8 ack_timeout;
+ u8 group_mode;
+ u8 lossless_prio[XSC_MAX_MAC_NUM];
+ u8 lossless_prio_len;
+ u8 retry_cnt_th;
+ u8 adapt_to_other;
+ u8 alloc_qp_id_mode;
+ u16 vf_num_per_pf;
+ u16 max_vf_num_per_pf;
+ u8 eth_pkt_offset;
+ u8 rdma_pkt_offset;
+ u8 tso_eth_pkt_offset;
+ u8 tx_dedi_pref;
+ u8 reg_mr_via_cmdq;
+ u8 per_dst_grp_thr;
+ u8 per_dst_grp_cnt;
+ u8 dcbx_status[XSC_MAX_MAC_NUM];
+ u8 dcbx_port_cnt;
+};
+
+struct hwc_get_t {
+ u8 cur_s_wqe_mode;
+ u8 next_s_wqe_mode;
+ u8 cur_r_wqe_mode;
+ u8 next_r_wqe_mode;
+ u8 cur_ack_timeout;
+ u8 next_ack_timeout;
+ u8 cur_group_mode;
+ u8 next_group_mode;
+ u8 cur_lossless_prio[XSC_MAX_MAC_NUM];
+ u8 next_lossless_prio[XSC_MAX_MAC_NUM];
+ u8 lossless_prio_len;
+ u8 cur_retry_cnt_th;
+ u8 next_retry_cnt_th;
+ u8 cur_adapt_to_other;
+ u8 next_adapt_to_other;
+ u16 cur_vf_num_per_pf;
+ u16 next_vf_num_per_pf;
+ u16 cur_max_vf_num_per_pf;
+ u16 next_max_vf_num_per_pf;
+ u8 cur_eth_pkt_offset;
+ u8 next_eth_pkt_offset;
+ u8 cur_rdma_pkt_offset;
+ u8 next_rdma_pkt_offset;
+ u8 cur_tso_eth_pkt_offset;
+ u8 next_tso_eth_pkt_offset;
+ u8 cur_alloc_qp_id_mode;
+ u8 next_alloc_qp_id_mode;
+ u8 cur_tx_dedi_pref;
+ u8 next_tx_dedi_pref;
+ u8 cur_reg_mr_via_cmdq;
+ u8 next_reg_mr_via_cmdq;
+ u8 cur_per_dst_grp_thr;
+ u8 next_per_dst_grp_thr;
+ u8 cur_per_dst_grp_cnt;
+ u8 next_per_dst_grp_cnt;
+ u8 cur_dcbx_status[XSC_MAX_MAC_NUM];
+ u8 next_dcbx_status[XSC_MAX_MAC_NUM];
+ u8 dcbx_port_cnt;
+};
+
+struct xsc_set_mtu_mbox_out {
+ struct xsc_outbox_hdr hdr;
+};
+
+struct xsc_query_eth_mac_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 index;
+};
+
+struct xsc_query_eth_mac_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 mac[6];
+};
+
+struct xsc_query_pause_cnt_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u16 mac_port;
+ u16 cnt_type;
+ u32 reg_addr;
+};
+
+struct xsc_query_pause_cnt_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u64 val;
+};
+
+enum {
+ XSC_TBM_CAP_HASH_PPH = 0,
+ XSC_TBM_CAP_RSS,
+ XSC_TBM_CAP_PP_BYPASS,
+ XSC_TBM_CAP_PCT_DROP_CONFIG,
+};
+
+struct xsc_nic_attr {
+ __be16 caps;
+ __be16 caps_mask;
+ u8 mac_addr[6];
+};
+
+struct xsc_rss_attr {
+ u8 rss_en;
+ u8 hfunc;
+ __be16 rqn_base;
+ __be16 rqn_num;
+ __be32 hash_tmpl;
+};
+
+struct xsc_cmd_enable_nic_hca_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_nic_attr nic;
+ struct xsc_rss_attr rss;
+};
+
+struct xsc_cmd_enable_nic_hca_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[2];
+};
+
+struct xsc_nic_dis_attr {
+ __be16 caps;
+};
+
+struct xsc_cmd_disable_nic_hca_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_nic_dis_attr nic;
+};
+
+struct xsc_cmd_disable_nic_hca_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[4];
+};
+
+enum {
+ XSC_RSS_HASH_KEY_UPDATE = 0,
+ XSC_RSS_HASH_TEMP_UPDATE,
+ XSC_RSS_HASH_FUNC_UPDATE,
+ XSC_RSS_RXQ_UPDATE,
+ XSC_RSS_RXQ_DROP,
+};
+
+struct xsc_rss_modify_attr {
+ u8 caps_mask;
+ u8 rss_en;
+ __be16 rqn_base;
+ __be16 rqn_num;
+ u8 hfunc;
+ __be32 hash_tmpl;
+ u8 hash_key[52];
+};
+
+struct xsc_cmd_modify_nic_hca_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ struct xsc_nic_attr nic;
+ struct xsc_rss_modify_attr rss;
+};
+
+struct xsc_cmd_modify_nic_hca_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd0[4];
+};
+
+struct xsc_function_reset_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 glb_func_id;
+ u8 rsvd[6];
+};
+
+struct xsc_function_reset_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+enum {
+ XSC_PCIE_LAT_FEAT_SET_EN = 0,
+ XSC_PCIE_LAT_FEAT_GET_EN,
+ XSC_PCIE_LAT_FEAT_SET_INTERVAL,
+ XSC_PCIE_LAT_FEAT_GET_INTERVAL,
+ XSC_PCIE_LAT_FEAT_GET_HISTOGRAM,
+ XSC_PCIE_LAT_FEAT_GET_PEAK,
+ XSC_PCIE_LAT_FEAT_HW,
+ XSC_PCIE_LAT_FEAT_HW_INIT,
+};
+
+struct xsc_pcie_lat {
+ u8 pcie_lat_enable;
+ u32 pcie_lat_interval[XSC_PCIE_LAT_CFG_INTERVAL_MAX];
+ u32 pcie_lat_histogram[XSC_PCIE_LAT_CFG_HISTOGRAM_MAX];
+ u32 pcie_lat_peak;
+};
+
+struct xsc_pcie_lat_feat_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 xsc_pcie_lat_feature_opcode;
+ struct xsc_pcie_lat pcie_lat;
+};
+
+struct xsc_pcie_lat_feat_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be16 xsc_pcie_lat_feature_opcode;
+ struct xsc_pcie_lat pcie_lat;
+};
+
+struct xsc_reg_mcia {
+ u8 module;
+ u8 status;
+
+ u8 i2c_device_address;
+ u8 page_number;
+ u8 device_address;
+
+ u8 size;
+
+ u8 dword_0[0x20];
+ u8 dword_1[0x20];
+ u8 dword_2[0x20];
+ u8 dword_3[0x20];
+ u8 dword_4[0x20];
+ u8 dword_5[0x20];
+ u8 dword_6[0x20];
+ u8 dword_7[0x20];
+ u8 dword_8[0x20];
+ u8 dword_9[0x20];
+ u8 dword_10[0x20];
+ u8 dword_11[0x20];
+};
+
+struct xsc_rtt_en_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 en;//0-disable, 1-enable
+ u8 rsvd[7];
+};
+
+struct xsc_rtt_en_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 en;//0-disable, 1-enable
+ u8 rsvd[7];
+};
+
+struct xsc_rtt_qpn_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 qpn[32];
+};
+
+struct xsc_rtt_qpn_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_get_rtt_qpn_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 qpn[32];
+};
+
+struct xsc_rtt_period_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be32 period; //ms
+};
+
+struct xsc_rtt_period_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be32 period; //ms
+ u8 rsvd[4];
+};
+
+struct xsc_rtt_result_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be64 result[32];
+};
+
+struct rtt_stats {
+ u64 rtt_succ_snd_req_cnt;
+ u64 rtt_succ_snd_rsp_cnt;
+ u64 rtt_fail_snd_req_cnt;
+ u64 rtt_fail_snd_rsp_cnt;
+ u64 rtt_rcv_req_cnt;
+ u64 rtt_rcv_rsp_cnt;
+ u64 rtt_rcv_unk_cnt;
+ u64 rtt_grp_invalid_cnt;
+};
+
+struct xsc_rtt_stats_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ struct rtt_stats stats;
+};
+
+enum {
+ XSC_AP_FEAT_SET_UDP_SPORT = 0,
+};
+
+struct xsc_ap_feat_set_udp_sport {
+ u32 qpn;
+ u32 udp_sport;
+};
+
+struct xsc_ap {
+ struct xsc_ap_feat_set_udp_sport set_udp_sport;
+};
+
+struct xsc_ap_feat_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ __be16 xsc_ap_feature_opcode;
+ struct xsc_ap ap;
+};
+
+struct xsc_ap_feat_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be16 xsc_ap_feature_opcode;
+ struct xsc_ap ap;
+};
+
+struct xsc_set_debug_info_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 set_field;
+ u8 log_level;
+ u8 cmd_verbose;
+ u8 rsvd[5];
+};
+
+struct xsc_set_debug_info_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_enable_relaxed_order_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_enable_relaxed_order_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_query_guid_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_query_guid_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ __be64 guid;
+};
+
+struct xsc_cmd_activate_hw_config_mbox_in {
+ struct xsc_inbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+struct xsc_cmd_activate_hw_config_mbox_out {
+ struct xsc_outbox_hdr hdr;
+ u8 rsvd[8];
+};
+
+#endif /* XSC_CMD_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h
new file mode 100644
index 000000000..281a3e134
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h
@@ -0,0 +1,218 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_CMDQ_H
+#define XSC_CMDQ_H
+
+#include "common/xsc_cmd.h"
+
+enum {
+ /* one minute for the sake of bringup. Generally, commands must always
+ * complete and we may need to increase this timeout value
+ */
+ XSC_CMD_TIMEOUT_MSEC = 10 * 1000,
+ XSC_CMD_WQ_MAX_NAME = 32,
+};
+
+enum {
+ XSC_CMD_DATA, /* print command payload only */
+ XSC_CMD_TIME, /* print command execution time */
+};
+
+enum {
+ XSC_MAX_COMMANDS = 32,
+ XSC_CMD_DATA_BLOCK_SIZE = 512,
+ XSC_PCI_CMD_XPORT = 7,
+};
+
+struct xsc_cmd_prot_block {
+ u8 data[XSC_CMD_DATA_BLOCK_SIZE];
+ u8 rsvd0[48];
+ __be64 next;
+ __be32 block_num;
+ u8 owner_status; //init to 0, dma user should change this val to 1
+ u8 token;
+ u8 ctrl_sig;
+ u8 sig;
+};
+
+struct cache_ent {
+ /* protect block chain allocations
+ */
+ spinlock_t lock;
+ struct list_head head;
+};
+
+struct cmd_msg_cache {
+ struct cache_ent large;
+ struct cache_ent med;
+
+};
+
+#define CMD_FIRST_SIZE 8
+struct xsc_cmd_first {
+ __be32 data[CMD_FIRST_SIZE];
+};
+
+struct xsc_cmd_mailbox {
+ void *buf;
+ dma_addr_t dma;
+ struct xsc_cmd_mailbox *next;
+};
+
+struct xsc_cmd_msg {
+ struct list_head list;
+ struct cache_ent *cache;
+ u32 len;
+ struct xsc_cmd_first first;
+ struct xsc_cmd_mailbox *next;
+};
+
+#define RSP_FIRST_SIZE 14
+struct xsc_rsp_first {
+ __be32 data[RSP_FIRST_SIZE]; //can be larger, xsc_rsp_layout
+};
+
+struct xsc_rsp_msg {
+ struct list_head list;
+ struct cache_ent *cache;
+ u32 len;
+ struct xsc_rsp_first first;
+ struct xsc_cmd_mailbox *next;
+};
+
+typedef void (*xsc_cmd_cbk_t)(int status, void *context);
+
+//hw will use this for some records(e.g. vf_id)
+struct cmdq_rsv {
+ u16 vf_id;
+ u8 rsv[2];
+};
+
+//related with hw, won't change
+#define CMDQ_ENTRY_SIZE 64
+
+struct xsc_cmd_layout {
+ struct cmdq_rsv rsv0;
+ __be32 inlen;
+ __be64 in_ptr;
+ __be32 in[CMD_FIRST_SIZE];
+ __be64 out_ptr;
+ __be32 outlen;
+ u8 token;
+ u8 sig;
+ u8 idx;
+ u8 type: 7;
+ u8 owner_bit: 1; //rsv for hw, arm will check this bit to make sure mem written
+};
+
+struct xsc_rsp_layout {
+ struct cmdq_rsv rsv0;
+ __be32 out[RSP_FIRST_SIZE];
+ u8 token;
+ u8 sig;
+ u8 idx;
+ u8 type: 7;
+ u8 owner_bit: 1; //rsv for hw, driver will check this bit to make sure mem written
+};
+
+struct xsc_cmd_work_ent {
+ struct xsc_cmd_msg *in;
+ struct xsc_rsp_msg *out;
+ int idx;
+ struct completion done;
+ struct xsc_cmd *cmd;
+ struct work_struct work;
+ struct xsc_cmd_layout *lay;
+ struct xsc_rsp_layout *rsp_lay;
+ int ret;
+ u8 status;
+ u8 token;
+ struct timespec64 ts1;
+ struct timespec64 ts2;
+};
+
+struct xsc_cmd_debug {
+ struct dentry *dbg_root;
+ struct dentry *dbg_in;
+ struct dentry *dbg_out;
+ struct dentry *dbg_outlen;
+ struct dentry *dbg_status;
+ struct dentry *dbg_run;
+ void *in_msg;
+ void *out_msg;
+ u8 status;
+ u16 inlen;
+ u16 outlen;
+};
+
+struct xsc_cmd_stats {
+ u64 sum;
+ u64 n;
+ struct dentry *root;
+ struct dentry *avg;
+ struct dentry *count;
+ /* protect command average calculations */
+ spinlock_t lock;
+};
+
+struct xsc_cmd_reg {
+ u32 req_pid_addr;
+ u32 req_cid_addr;
+ u32 rsp_pid_addr;
+ u32 rsp_cid_addr;
+ u32 req_buf_h_addr;
+ u32 req_buf_l_addr;
+ u32 rsp_buf_h_addr;
+ u32 rsp_buf_l_addr;
+ u32 msix_vec_addr;
+ u32 element_sz_addr;
+ u32 q_depth_addr;
+ u32 interrupt_stat_addr;
+};
+
+enum xsc_cmd_status {
+ XSC_CMD_STATUS_NORMAL,
+ XSC_CMD_STATUS_TIMEDOUT,
+};
+
+struct xsc_cmd {
+ struct xsc_cmd_reg reg;
+ void *cmd_buf;
+ void *cq_buf;
+ dma_addr_t dma;
+ dma_addr_t cq_dma;
+ u16 cmd_pid;
+ u16 cq_cid;
+ u8 owner_bit;
+ u8 cmdif_rev;
+ u8 log_sz;
+ u8 log_stride;
+ int max_reg_cmds;
+ int events;
+ u32 __iomem *vector;
+
+ spinlock_t alloc_lock; /* protect command queue allocations */
+ spinlock_t token_lock; /* protect token allocations */
+ spinlock_t doorbell_lock; /* protect cmdq req pid doorbell */
+ u8 token;
+ unsigned long bitmask;
+ char wq_name[XSC_CMD_WQ_MAX_NAME];
+ struct workqueue_struct *wq;
+ struct task_struct *cq_task;
+ struct semaphore sem;
+ int mode;
+ struct xsc_cmd_work_ent *ent_arr[XSC_MAX_COMMANDS];
+ struct dma_pool *pool;
+ struct xsc_cmd_debug dbg;
+ struct cmd_msg_cache cache;
+ int checksum_disabled;
+ struct xsc_cmd_stats stats[XSC_CMD_OP_MAX];
+ unsigned int irqn;
+ u8 ownerbit_learned;
+ u8 cmd_status;
+};
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 5ed12760e..61ae5eafc 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -8,7 +8,9 @@
#include <linux/kernel.h>
#include <linux/pci.h>
+#include "common/xsc_cmdq.h"
+extern uint xsc_debug_mask;
extern unsigned int xsc_log_level;
#define XSC_PCI_VENDOR_ID 0x1f67
@@ -93,6 +95,11 @@ do { \
} \
} while (0)
+#define REG_ADDR(dev, offset) \
+ (((dev)->bar) + ((offset) - 0xA0000000))
+
+#define REG_WIDTH_TO_STRIDE(width) ((width) / 8)
+
enum {
XSC_MAX_NAME_LEN = 32,
};
@@ -106,6 +113,11 @@ enum xsc_pci_state {
XSC_PCI_STATE_ENABLED,
};
+enum xsc_interface_state {
+ XSC_INTERFACE_STATE_UP = BIT(0),
+ XSC_INTERFACE_STATE_TEARDOWN = BIT(1),
+};
+
struct xsc_priv {
char name[XSC_MAX_NAME_LEN];
struct list_head dev_list;
@@ -123,6 +135,9 @@ struct xsc_core_device {
void __iomem *bar;
int bar_num;
+ struct xsc_cmd cmd;
+ u16 cmdq_ver;
+
struct mutex pci_state_mutex; /* protect pci_state */
enum xsc_pci_state pci_state;
struct mutex intf_state_mutex; /* protect intf_state */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h
new file mode 100644
index 000000000..636489fa3
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_DRIVER_H
+#define XSC_DRIVER_H
+
+#include "common/xsc_core.h"
+#include "common/xsc_cmd.h"
+
+int xsc_cmd_init(struct xsc_core_device *xdev);
+void xsc_cmd_cleanup(struct xsc_core_device *xdev);
+void xsc_cmd_use_events(struct xsc_core_device *xdev);
+void xsc_cmd_use_polling(struct xsc_core_device *xdev);
+int xsc_cmd_err_handler(struct xsc_core_device *xdev);
+void xsc_cmd_resp_handler(struct xsc_core_device *xdev);
+int xsc_cmd_status_to_err(struct xsc_outbox_hdr *hdr);
+int xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out,
+ int out_size);
+int xsc_cmd_version_check(struct xsc_core_device *xdev);
+const char *xsc_command_str(int command);
+
+#endif
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 709270df8..5e0f0a205 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o
+xsc_pci-y := main.o cmdq.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c
new file mode 100644
index 000000000..58b612364
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c
@@ -0,0 +1,2000 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ * Copyright (c) 2013-2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifdef HAVE_GENERIC_KMAP_TYPE
+#include <asm-generic/kmap_types.h>
+#endif
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/random.h>
+#include <linux/kthread.h>
+#include <linux/io-mapping.h>
+#include "common/xsc_driver.h"
+#include "common/xsc_cmd.h"
+#include "common/xsc_auto_hw.h"
+#include "common/xsc_core.h"
+
+enum {
+ CMD_IF_REV = 3,
+};
+
+enum {
+ CMD_MODE_POLLING,
+ CMD_MODE_EVENTS
+};
+
+enum {
+ NUM_LONG_LISTS = 2,
+ NUM_MED_LISTS = 64,
+ LONG_LIST_SIZE = (2ULL * 1024 * 1024 * 1024 / PAGE_SIZE) * 8 + 16 +
+ XSC_CMD_DATA_BLOCK_SIZE,
+ MED_LIST_SIZE = 16 + XSC_CMD_DATA_BLOCK_SIZE,
+};
+
+enum {
+ XSC_CMD_DELIVERY_STAT_OK = 0x0,
+ XSC_CMD_DELIVERY_STAT_SIGNAT_ERR = 0x1,
+ XSC_CMD_DELIVERY_STAT_TOK_ERR = 0x2,
+ XSC_CMD_DELIVERY_STAT_BAD_BLK_NUM_ERR = 0x3,
+ XSC_CMD_DELIVERY_STAT_OUT_PTR_ALIGN_ERR = 0x4,
+ XSC_CMD_DELIVERY_STAT_IN_PTR_ALIGN_ERR = 0x5,
+ XSC_CMD_DELIVERY_STAT_FW_ERR = 0x6,
+ XSC_CMD_DELIVERY_STAT_IN_LENGTH_ERR = 0x7,
+ XSC_CMD_DELIVERY_STAT_OUT_LENGTH_ERR = 0x8,
+ XSC_CMD_DELIVERY_STAT_RES_FLD_NOT_CLR_ERR = 0x9,
+ XSC_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10,
+};
+
+static struct xsc_cmd_work_ent *alloc_cmd(struct xsc_cmd *cmd,
+ struct xsc_cmd_msg *in,
+ struct xsc_rsp_msg *out)
+{
+ struct xsc_cmd_work_ent *ent;
+
+ ent = kzalloc(sizeof(*ent), GFP_KERNEL);
+ if (!ent)
+ return ERR_PTR(-ENOMEM);
+
+ ent->in = in;
+ ent->out = out;
+ ent->cmd = cmd;
+
+ return ent;
+}
+
+static u8 alloc_token(struct xsc_cmd *cmd)
+{
+ u8 token;
+
+ spin_lock(&cmd->token_lock);
+ token = cmd->token++ % 255 + 1;
+ spin_unlock(&cmd->token_lock);
+
+ return token;
+}
+
+static int alloc_ent(struct xsc_cmd *cmd)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&cmd->alloc_lock, flags);
+ ret = find_first_bit(&cmd->bitmask, cmd->max_reg_cmds);
+ if (ret < cmd->max_reg_cmds)
+ clear_bit(ret, &cmd->bitmask);
+ spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+
+ return ret < cmd->max_reg_cmds ? ret : -ENOMEM;
+}
+
+static void free_ent(struct xsc_cmd *cmd, int idx)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&cmd->alloc_lock, flags);
+ set_bit(idx, &cmd->bitmask);
+ spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+}
+
+static struct xsc_cmd_layout *get_inst(struct xsc_cmd *cmd, int idx)
+{
+ return cmd->cmd_buf + (idx << cmd->log_stride);
+}
+
+static struct xsc_rsp_layout *get_cq_inst(struct xsc_cmd *cmd, int idx)
+{
+ return cmd->cq_buf + (idx << cmd->log_stride);
+}
+
+static u8 xor8_buf(void *buf, int len)
+{
+ u8 *ptr = buf;
+ u8 sum = 0;
+ int i;
+
+ for (i = 0; i < len; i++)
+ sum ^= ptr[i];
+
+ return sum;
+}
+
+static int verify_block_sig(struct xsc_cmd_prot_block *block)
+{
+ if (xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 1) != 0xff)
+ return -EINVAL;
+
+ if (xor8_buf(block, sizeof(*block)) != 0xff)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void calc_block_sig(struct xsc_cmd_prot_block *block, u8 token)
+{
+ block->token = token;
+ block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 2);
+ block->sig = ~xor8_buf(block, sizeof(*block) - 1);
+}
+
+static void calc_chain_sig(struct xsc_cmd_mailbox *head, u8 token)
+{
+ struct xsc_cmd_mailbox *next = head;
+
+ while (next) {
+ calc_block_sig(next->buf, token);
+ next = next->next;
+ }
+}
+
+static void set_signature(struct xsc_cmd_work_ent *ent)
+{
+ ent->lay->sig = ~xor8_buf(ent->lay, sizeof(*ent->lay));
+ calc_chain_sig(ent->in->next, ent->token);
+ calc_chain_sig(ent->out->next, ent->token);
+}
+
+static void free_cmd(struct xsc_cmd_work_ent *ent)
+{
+ kfree(ent);
+}
+
+static int verify_signature(struct xsc_cmd_work_ent *ent)
+{
+ struct xsc_cmd_mailbox *next = ent->out->next;
+ int err;
+ u8 sig;
+
+ sig = xor8_buf(ent->rsp_lay, sizeof(*ent->rsp_lay));
+ if (sig != 0xff)
+ return -EINVAL;
+
+ while (next) {
+ err = verify_block_sig(next->buf);
+ if (err)
+ return err;
+
+ next = next->next;
+ }
+
+ return 0;
+}
+
+static void dump_buf(void *buf, int size, int offset)
+{
+ __be32 *p = buf;
+ int i;
+
+ for (i = 0; i < size; i += 16) {
+ xsc_pr_debug("%03x: %08x %08x %08x %08x\n", offset, be32_to_cpu(p[0]),
+ be32_to_cpu(p[1]), be32_to_cpu(p[2]), be32_to_cpu(p[3]));
+ p += 4;
+ offset += 16;
+ }
+ xsc_pr_debug("\n");
+}
+
+const char *xsc_command_str(int command)
+{
+ switch (command) {
+ case XSC_CMD_OP_QUERY_HCA_CAP:
+ return "QUERY_HCA_CAP";
+
+ case XSC_CMD_OP_ENABLE_HCA:
+ return "ENABLE_HCA";
+
+ case XSC_CMD_OP_DISABLE_HCA:
+ return "DISABLE_HCA";
+
+ case XSC_CMD_OP_MODIFY_HCA:
+ return "MODIFY_HCA";
+
+ case XSC_CMD_OP_QUERY_CMDQ_VERSION:
+ return "QUERY_CMDQ_VERSION";
+
+ case XSC_CMD_OP_QUERY_MSIX_TBL_INFO:
+ return "QUERY_MSIX_TBL_INFO";
+
+ case XSC_CMD_OP_FUNCTION_RESET:
+ return "FUNCTION_RESET";
+
+ case XSC_CMD_OP_ALLOC_IA_LOCK:
+ return "ALLOC_IA_LOCK";
+
+ case XSC_CMD_OP_RELEASE_IA_LOCK:
+ return "RELEASE_IA_LOCK";
+
+ case XSC_CMD_OP_DUMMY:
+ return "DUMMY_CMD";
+
+ case XSC_CMD_OP_SET_DEBUG_INFO:
+ return "SET_DEBUG_INFO";
+
+ case XSC_CMD_OP_CREATE_MKEY:
+ return "CREATE_MKEY";
+
+ case XSC_CMD_OP_QUERY_MKEY:
+ return "QUERY_MKEY";
+
+ case XSC_CMD_OP_DESTROY_MKEY:
+ return "DESTROY_MKEY";
+
+ case XSC_CMD_OP_QUERY_SPECIAL_CONTEXTS:
+ return "QUERY_SPECIAL_CONTEXTS";
+
+ case XSC_CMD_OP_SET_MPT:
+ return "SET_MPT";
+
+ case XSC_CMD_OP_SET_MTT:
+ return "SET_MTT";
+
+ case XSC_CMD_OP_CREATE_EQ:
+ return "CREATE_EQ";
+
+ case XSC_CMD_OP_DESTROY_EQ:
+ return "DESTROY_EQ";
+
+ case XSC_CMD_OP_QUERY_EQ:
+ return "QUERY_EQ";
+
+ case XSC_CMD_OP_CREATE_CQ:
+ return "CREATE_CQ";
+
+ case XSC_CMD_OP_DESTROY_CQ:
+ return "DESTROY_CQ";
+
+ case XSC_CMD_OP_QUERY_CQ:
+ return "QUERY_CQ";
+
+ case XSC_CMD_OP_MODIFY_CQ:
+ return "MODIFY_CQ";
+
+ case XSC_CMD_OP_CREATE_QP:
+ return "CREATE_QP";
+
+ case XSC_CMD_OP_DESTROY_QP:
+ return "DESTROY_QP";
+
+ case XSC_CMD_OP_RST2INIT_QP:
+ return "RST2INIT_QP";
+
+ case XSC_CMD_OP_INIT2RTR_QP:
+ return "INIT2RTR_QP";
+
+ case XSC_CMD_OP_RTR2RTS_QP:
+ return "RTR2RTS_QP";
+
+ case XSC_CMD_OP_RTS2RTS_QP:
+ return "RTS2RTS_QP";
+
+ case XSC_CMD_OP_SQERR2RTS_QP:
+ return "SQERR2RTS_QP";
+
+ case XSC_CMD_OP_2ERR_QP:
+ return "2ERR_QP";
+
+ case XSC_CMD_OP_RTS2SQD_QP:
+ return "RTS2SQD_QP";
+
+ case XSC_CMD_OP_SQD2RTS_QP:
+ return "SQD2RTS_QP";
+
+ case XSC_CMD_OP_2RST_QP:
+ return "2RST_QP";
+
+ case XSC_CMD_OP_QUERY_QP:
+ return "QUERY_QP";
+
+ case XSC_CMD_OP_CONF_SQP:
+ return "CONF_SQP";
+
+ case XSC_CMD_OP_MAD_IFC:
+ return "MAD_IFC";
+
+ case XSC_CMD_OP_INIT2INIT_QP:
+ return "INIT2INIT_QP";
+
+ case XSC_CMD_OP_SQD2SQD_QP:
+ return "SQD2SQD_QP";
+
+ case XSC_CMD_OP_QUERY_QP_FLUSH_STATUS:
+ return "QUERY_QP_FLUSH_STATUS";
+
+ case XSC_CMD_OP_ALLOC_PD:
+ return "ALLOC_PD";
+
+ case XSC_CMD_OP_DEALLOC_PD:
+ return "DEALLOC_PD";
+
+ case XSC_CMD_OP_ACCESS_REG:
+ return "ACCESS_REG";
+
+ case XSC_CMD_OP_MODIFY_RAW_QP:
+ return "MODIFY_RAW_QP";
+
+ case XSC_CMD_OP_ENABLE_NIC_HCA:
+ return "ENABLE_NIC_HCA";
+
+ case XSC_CMD_OP_DISABLE_NIC_HCA:
+ return "DISABLE_NIC_HCA";
+
+ case XSC_CMD_OP_MODIFY_NIC_HCA:
+ return "MODIFY_NIC_HCA";
+
+ case XSC_CMD_OP_QUERY_NIC_VPORT_CONTEXT:
+ return "QUERY_NIC_VPORT_CONTEXT";
+
+ case XSC_CMD_OP_MODIFY_NIC_VPORT_CONTEXT:
+ return "MODIFY_NIC_VPORT_CONTEXT";
+
+ case XSC_CMD_OP_QUERY_VPORT_STATE:
+ return "QUERY_VPORT_STATE";
+
+ case XSC_CMD_OP_MODIFY_VPORT_STATE:
+ return "MODIFY_VPORT_STATE";
+
+ case XSC_CMD_OP_QUERY_HCA_VPORT_CONTEXT:
+ return "QUERY_HCA_VPORT_CONTEXT";
+
+ case XSC_CMD_OP_MODIFY_HCA_VPORT_CONTEXT:
+ return "MODIFY_HCA_VPORT_CONTEXT";
+
+ case XSC_CMD_OP_QUERY_HCA_VPORT_GID:
+ return "QUERY_HCA_VPORT_GID";
+
+ case XSC_CMD_OP_QUERY_HCA_VPORT_PKEY:
+ return "QUERY_HCA_VPORT_PKEY";
+
+ case XSC_CMD_OP_QUERY_VPORT_COUNTER:
+ return "QUERY_VPORT_COUNTER";
+
+ case XSC_CMD_OP_QUERY_PRIO_STATS:
+ return "QUERY_PRIO_STATS";
+
+ case XSC_CMD_OP_QUERY_PHYPORT_STATE:
+ return "QUERY_PHYPORT_STATE";
+
+ case XSC_CMD_OP_QUERY_EVENT_TYPE:
+ return "QUERY_EVENT_TYPE";
+
+ case XSC_CMD_OP_QUERY_LINK_INFO:
+ return "QUERY_LINK_INFO";
+
+ case XSC_CMD_OP_MODIFY_LINK_INFO:
+ return "MODIFY_LINK_INFO";
+
+ case XSC_CMD_OP_MODIFY_FEC_PARAM:
+ return "MODIFY_FEC_PARAM";
+
+ case XSC_CMD_OP_QUERY_FEC_PARAM:
+ return "QUERY_FEC_PARAM";
+
+ case XSC_CMD_OP_LAG_CREATE:
+ return "LAG_CREATE";
+
+ case XSC_CMD_OP_LAG_ADD_MEMBER:
+ return "LAG ADD MEMBER";
+
+ case XSC_CMD_OP_LAG_REMOVE_MEMBER:
+ return "LAG REMOVE MEMBER";
+
+ case XSC_CMD_OP_LAG_UPDATE_MEMBER_STATUS:
+ return "LAG UPDATE MEMBER STATUS";
+
+ case XSC_CMD_OP_LAG_UPDATE_HASH_TYPE:
+ return "LAG UPDATE HASH TYPE";
+
+ case XSC_CMD_OP_LAG_DESTROY:
+ return "LAG_DESTROY";
+
+ case XSC_CMD_OP_LAG_SET_QOS:
+ return "LAG_SET_QOS";
+
+ case XSC_CMD_OP_ENABLE_MSIX:
+ return "ENABLE_MSIX";
+
+ case XSC_CMD_OP_IOCTL_FLOW:
+ return "CFG_FLOW_TABLE";
+
+ case XSC_CMD_OP_IOCTL_SET_DSCP_PMT:
+ return "SET_DSCP_PMT";
+
+ case XSC_CMD_OP_IOCTL_GET_DSCP_PMT:
+ return "GET_DSCP_PMT";
+
+ case XSC_CMD_OP_IOCTL_SET_TRUST_MODE:
+ return "SET_TRUST_MODE";
+
+ case XSC_CMD_OP_IOCTL_GET_TRUST_MODE:
+ return "GET_TRUST_MODE";
+
+ case XSC_CMD_OP_IOCTL_SET_PCP_PMT:
+ return "SET_PCP_PMT";
+
+ case XSC_CMD_OP_IOCTL_GET_PCP_PMT:
+ return "GET_PCP_PMT";
+
+ case XSC_CMD_OP_IOCTL_SET_DEFAULT_PRI:
+ return "SET_DEFAULT_PRI";
+
+ case XSC_CMD_OP_IOCTL_GET_DEFAULT_PRI:
+ return "GET_DEFAULT_PRI";
+
+ case XSC_CMD_OP_IOCTL_SET_PFC:
+ return "SET_PFC";
+
+ case XSC_CMD_OP_IOCTL_SET_PFC_DROP_TH:
+ return "SET_PFC_DROP_TH";
+
+ case XSC_CMD_OP_IOCTL_GET_PFC:
+ return "GET_PFC";
+
+ case XSC_CMD_OP_IOCTL_GET_PFC_CFG_STATUS:
+ return "GET_PFC_CFG_STATUS";
+
+ case XSC_CMD_OP_IOCTL_SET_RATE_LIMIT:
+ return "SET_RATE_LIMIT";
+
+ case XSC_CMD_OP_IOCTL_GET_RATE_LIMIT:
+ return "GET_RATE_LIMIT";
+
+ case XSC_CMD_OP_IOCTL_SET_SP:
+ return "SET_SP";
+
+ case XSC_CMD_OP_IOCTL_GET_SP:
+ return "GET_SP";
+
+ case XSC_CMD_OP_IOCTL_SET_WEIGHT:
+ return "SET_WEIGHT";
+
+ case XSC_CMD_OP_IOCTL_GET_WEIGHT:
+ return "GET_WEIGHT";
+
+ case XSC_CMD_OP_IOCTL_DPU_SET_PORT_WEIGHT:
+ return "DPU_SET_PORT_WEIGHT";
+
+ case XSC_CMD_OP_IOCTL_DPU_GET_PORT_WEIGHT:
+ return "DPU_GET_PORT_WEIGHT";
+
+ case XSC_CMD_OP_IOCTL_DPU_SET_PRIO_WEIGHT:
+ return "DPU_SET_PRIO_WEIGHT";
+
+ case XSC_CMD_OP_IOCTL_DPU_GET_PRIO_WEIGHT:
+ return "DPU_GET_PRIO_WEIGHT";
+
+ case XSC_CMD_OP_IOCTL_SET_WATCHDOG_EN:
+ return "SET_WATCHDOG_EN";
+
+ case XSC_CMD_OP_IOCTL_GET_WATCHDOG_EN:
+ return "GET_WATCHDOG_EN";
+
+ case XSC_CMD_OP_IOCTL_SET_WATCHDOG_PERIOD:
+ return "SET_WATCHDOG_PERIOD";
+
+ case XSC_CMD_OP_IOCTL_GET_WATCHDOG_PERIOD:
+ return "GET_WATCHDOG_PERIOD";
+
+ case XSC_CMD_OP_IOCTL_SET_ENABLE_RP:
+ return "ENABLE_RP";
+
+ case XSC_CMD_OP_IOCTL_SET_ENABLE_NP:
+ return "ENABLE_NP";
+
+ case XSC_CMD_OP_IOCTL_SET_INIT_ALPHA:
+ return "SET_INIT_ALPHA";
+
+ case XSC_CMD_OP_IOCTL_SET_G:
+ return "SET_G";
+
+ case XSC_CMD_OP_IOCTL_SET_AI:
+ return "SET_AI";
+
+ case XSC_CMD_OP_IOCTL_SET_HAI:
+ return "SET_HAI";
+
+ case XSC_CMD_OP_IOCTL_SET_TH:
+ return "SET_TH";
+
+ case XSC_CMD_OP_IOCTL_SET_BC_TH:
+ return "SET_BC_TH";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_OPCODE:
+ return "SET_CNP_OPCODE";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_BTH_B:
+ return "SET_CNP_BTH_B";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_BTH_F:
+ return "SET_CNP_BTH_F";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_ECN:
+ return "SET_CNP_ECN";
+
+ case XSC_CMD_OP_IOCTL_SET_DATA_ECN:
+ return "SET_DATA_ECN";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_TX_INTERVAL:
+ return "SET_CNP_TX_INTERVAL";
+
+ case XSC_CMD_OP_IOCTL_SET_EVT_PERIOD_RSTTIME:
+ return "SET_EVT_PERIOD_RSTTIME";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_DSCP:
+ return "SET_CNP_DSCP";
+
+ case XSC_CMD_OP_IOCTL_SET_CNP_PCP:
+ return "SET_CNP_PCP";
+
+ case XSC_CMD_OP_IOCTL_SET_EVT_PERIOD_ALPHA:
+ return "SET_EVT_PERIOD_ALPHA";
+
+ case XSC_CMD_OP_IOCTL_GET_CC_CFG:
+ return "GET_CC_CFG";
+
+ case XSC_CMD_OP_IOCTL_GET_CC_STAT:
+ return "GET_CC_STAT";
+
+ case XSC_CMD_OP_IOCTL_SET_CLAMP_TGT_RATE:
+ return "SET_CLAMP_TGT_RATE";
+
+ case XSC_CMD_OP_IOCTL_SET_MAX_HAI_FACTOR:
+ return "SET_MAX_HAI_FACTOR";
+
+ case XSC_CMD_OP_IOCTL_SET_HWC:
+ return "SET_HWCONFIG";
+
+ case XSC_CMD_OP_IOCTL_GET_HWC:
+ return "GET_HWCONFIG";
+
+ case XSC_CMD_OP_SET_MTU:
+ return "SET_MTU";
+
+ case XSC_CMD_OP_QUERY_ETH_MAC:
+ return "QUERY_ETH_MAC";
+
+ case XSC_CMD_OP_QUERY_HW_STATS:
+ return "QUERY_HW_STATS";
+
+ case XSC_CMD_OP_QUERY_PAUSE_CNT:
+ return "QUERY_PAUSE_CNT";
+
+ case XSC_CMD_OP_SET_RTT_EN:
+ return "SET_RTT_EN";
+
+ case XSC_CMD_OP_GET_RTT_EN:
+ return "GET_RTT_EN";
+
+ case XSC_CMD_OP_SET_RTT_QPN:
+ return "SET_RTT_QPN";
+
+ case XSC_CMD_OP_GET_RTT_QPN:
+ return "GET_RTT_QPN";
+
+ case XSC_CMD_OP_SET_RTT_PERIOD:
+ return "SET_RTT_PERIOD";
+
+ case XSC_CMD_OP_GET_RTT_PERIOD:
+ return "GET_RTT_PERIOD";
+
+ case XSC_CMD_OP_GET_RTT_RESULT:
+ return "GET_RTT_RESULT";
+
+ case XSC_CMD_OP_GET_RTT_STATS:
+ return "ET_RTT_STATS";
+
+ case XSC_CMD_OP_SET_LED_STATUS:
+ return "SET_LED_STATUS";
+
+ case XSC_CMD_OP_AP_FEAT:
+ return "AP_FEAT";
+
+ case XSC_CMD_OP_PCIE_LAT_FEAT:
+ return "PCIE_LAT_FEAT";
+
+ case XSC_CMD_OP_USER_EMU_CMD:
+ return "USER_EMU_CMD";
+
+ case XSC_CMD_OP_QUERY_PFC_PRIO_STATS:
+ return "QUERY_PFC_PRIO_STATS";
+
+ case XSC_CMD_OP_IOCTL_QUERY_PFC_STALL_STATS:
+ return "QUERY_PFC_STALL_STATS";
+
+ case XSC_CMD_OP_QUERY_HW_STATS_RDMA:
+ return "QUERY_HW_STATS_RDMA";
+
+ case XSC_CMD_OP_QUERY_HW_STATS_ETH:
+ return "QUERY_HW_STATS_ETH";
+
+ case XSC_CMD_OP_SET_VPORT_RATE_LIMIT:
+ return "SET_VPORT_RATE_LIMIT";
+
+ default: return "unknown command opcode";
+ }
+}
+
+static void dump_command(struct xsc_core_device *xdev, struct xsc_cmd_mailbox *next,
+ struct xsc_cmd_work_ent *ent, int input, int len)
+{
+ u16 op = be16_to_cpu(((struct xsc_inbox_hdr *)(ent->lay->in))->opcode);
+ int offset = 0;
+
+ if (!(xsc_debug_mask & (1 << XSC_CMD_DATA)))
+ return;
+
+ xsc_core_dbg(xdev, "dump command %s(0x%x) %s\n", xsc_command_str(op), op,
+ input ? "INPUT" : "OUTPUT");
+
+ if (input) {
+ dump_buf(ent->lay, sizeof(*ent->lay), offset);
+ offset += sizeof(*ent->lay);
+ } else {
+ dump_buf(ent->rsp_lay, sizeof(*ent->rsp_lay), offset);
+ offset += sizeof(*ent->rsp_lay);
+ }
+
+ while (next && offset < len) {
+ xsc_core_dbg(xdev, "command block:\n");
+ dump_buf(next->buf, sizeof(struct xsc_cmd_prot_block), offset);
+ offset += sizeof(struct xsc_cmd_prot_block);
+ next = next->next;
+ }
+}
+
+static void cmd_work_handler(struct work_struct *work)
+{
+ struct xsc_cmd_work_ent *ent = container_of(work, struct xsc_cmd_work_ent, work);
+ struct xsc_cmd *cmd = ent->cmd;
+ struct xsc_core_device *xdev = container_of(cmd, struct xsc_core_device, cmd);
+ struct xsc_cmd_layout *lay;
+ struct semaphore *sem;
+ unsigned long flags;
+
+ sem = &cmd->sem;
+ down(sem);
+ ent->idx = alloc_ent(cmd);
+ if (ent->idx < 0) {
+ xsc_core_err(xdev, "failed to allocate command entry\n");
+ up(sem);
+ return;
+ }
+
+ ent->token = alloc_token(cmd);
+ cmd->ent_arr[ent->idx] = ent;
+
+ spin_lock_irqsave(&cmd->doorbell_lock, flags);
+ lay = get_inst(cmd, cmd->cmd_pid);
+ ent->lay = lay;
+ memset(lay, 0, sizeof(*lay));
+ memcpy(lay->in, ent->in->first.data, sizeof(lay->in));
+ if (ent->in->next)
+ lay->in_ptr = cpu_to_be64(ent->in->next->dma);
+ lay->inlen = cpu_to_be32(ent->in->len);
+ if (ent->out->next)
+ lay->out_ptr = cpu_to_be64(ent->out->next->dma);
+ lay->outlen = cpu_to_be32(ent->out->len);
+ lay->type = XSC_PCI_CMD_XPORT;
+ lay->token = ent->token;
+ lay->idx = ent->idx;
+ if (!cmd->checksum_disabled)
+ set_signature(ent);
+ else
+ lay->sig = 0xff;
+ dump_command(xdev, ent->in->next, ent, 1, ent->in->len);
+
+ ktime_get_ts64(&ent->ts1);
+
+ /* ring doorbell after the descriptor is valid */
+ wmb();
+
+ cmd->cmd_pid = (cmd->cmd_pid + 1) % (1 << cmd->log_sz);
+ writel(cmd->cmd_pid, REG_ADDR(xdev, cmd->reg.req_pid_addr));
+ spin_unlock_irqrestore(&cmd->doorbell_lock, flags);
+
+#ifdef XSC_DEBUG
+ xsc_core_dbg(xdev, "write 0x%x to command doorbell, idx %u\n", cmd->cmd_pid, ent->idx);
+#endif
+}
+
+static const char *deliv_status_to_str(u8 status)
+{
+ switch (status) {
+ case XSC_CMD_DELIVERY_STAT_OK:
+ return "no errors";
+ case XSC_CMD_DELIVERY_STAT_SIGNAT_ERR:
+ return "signature error";
+ case XSC_CMD_DELIVERY_STAT_TOK_ERR:
+ return "token error";
+ case XSC_CMD_DELIVERY_STAT_BAD_BLK_NUM_ERR:
+ return "bad block number";
+ case XSC_CMD_DELIVERY_STAT_OUT_PTR_ALIGN_ERR:
+ return "output pointer not aligned to block size";
+ case XSC_CMD_DELIVERY_STAT_IN_PTR_ALIGN_ERR:
+ return "input pointer not aligned to block size";
+ case XSC_CMD_DELIVERY_STAT_FW_ERR:
+ return "firmware internal error";
+ case XSC_CMD_DELIVERY_STAT_IN_LENGTH_ERR:
+ return "command input length error";
+ case XSC_CMD_DELIVERY_STAT_OUT_LENGTH_ERR:
+ return "command output length error";
+ case XSC_CMD_DELIVERY_STAT_RES_FLD_NOT_CLR_ERR:
+ return "reserved fields not cleared";
+ case XSC_CMD_DELIVERY_STAT_CMD_DESCR_ERR:
+ return "bad command descriptor type";
+ default:
+ return "unknown status code";
+ }
+}
+
+static u16 msg_to_opcode(struct xsc_cmd_msg *in)
+{
+ struct xsc_inbox_hdr *hdr = (struct xsc_inbox_hdr *)(in->first.data);
+
+ return be16_to_cpu(hdr->opcode);
+}
+
+static int wait_func(struct xsc_core_device *xdev, struct xsc_cmd_work_ent *ent)
+{
+ unsigned long timeout = msecs_to_jiffies(XSC_CMD_TIMEOUT_MSEC);
+ int err;
+ struct xsc_cmd *cmd = &xdev->cmd;
+
+ if (!wait_for_completion_timeout(&ent->done, timeout))
+ err = -ETIMEDOUT;
+ else
+ err = ent->ret;
+
+ if (err == -ETIMEDOUT) {
+ cmd->cmd_status = XSC_CMD_STATUS_TIMEDOUT;
+ xsc_core_warn(xdev, "wait for %s(0x%x) response timeout!\n",
+ xsc_command_str(msg_to_opcode(ent->in)),
+ msg_to_opcode(ent->in));
+ } else if (err) {
+ xsc_core_dbg(xdev, "err %d, delivery status %s(%d)\n", err,
+ deliv_status_to_str(ent->status), ent->status);
+ }
+
+ return err;
+}
+
+/* Notes:
+ * 1. Callback functions may not sleep
+ * 2. page queue commands do not support asynchrous completion
+ */
+static int xsc_cmd_invoke(struct xsc_core_device *xdev, struct xsc_cmd_msg *in,
+ struct xsc_rsp_msg *out, u8 *status)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_cmd_work_ent *ent;
+ ktime_t t1, t2, delta;
+ struct xsc_cmd_stats *stats;
+ int err = 0;
+ s64 ds;
+ u16 op;
+ struct semaphore *sem;
+
+ ent = alloc_cmd(cmd, in, out);
+ if (IS_ERR(ent))
+ return PTR_ERR(ent);
+
+ init_completion(&ent->done);
+ INIT_WORK(&ent->work, cmd_work_handler);
+ if (!queue_work(cmd->wq, &ent->work)) {
+ xsc_core_warn(xdev, "failed to queue work\n");
+ err = -ENOMEM;
+ goto out_free;
+ }
+
+ err = wait_func(xdev, ent);
+ if (err == -ETIMEDOUT)
+ goto out;
+ t1 = timespec64_to_ktime(ent->ts1);
+ t2 = timespec64_to_ktime(ent->ts2);
+ delta = ktime_sub(t2, t1);
+ ds = ktime_to_ns(delta);
+ op = be16_to_cpu(((struct xsc_inbox_hdr *)in->first.data)->opcode);
+ if (op < ARRAY_SIZE(cmd->stats)) {
+ stats = &cmd->stats[op];
+ spin_lock(&stats->lock);
+ stats->sum += ds;
+ ++stats->n;
+ spin_unlock(&stats->lock);
+ }
+ xsc_core_dbg_mask(xdev, 1 << XSC_CMD_TIME,
+ "fw exec time for %s is %lld nsec\n",
+ xsc_command_str(op), ds);
+ *status = ent->status;
+ free_cmd(ent);
+
+ return err;
+
+out:
+ sem = &cmd->sem;
+ up(sem);
+out_free:
+ free_cmd(ent);
+ return err;
+}
+
+static int xsc_copy_to_cmd_msg(struct xsc_cmd_msg *to, void *from, int size)
+{
+ struct xsc_cmd_prot_block *block;
+ struct xsc_cmd_mailbox *next;
+ int copy;
+
+ if (!to || !from)
+ return -ENOMEM;
+
+ copy = min_t(int, size, sizeof(to->first.data));
+ memcpy(to->first.data, from, copy);
+ size -= copy;
+ from += copy;
+
+ next = to->next;
+ while (size) {
+ if (!next) {
+ /* this is a BUG */
+ return -ENOMEM;
+ }
+
+ copy = min_t(int, size, XSC_CMD_DATA_BLOCK_SIZE);
+ block = next->buf;
+ memcpy(block->data, from, copy);
+ block->owner_status = 0;
+ from += copy;
+ size -= copy;
+ next = next->next;
+ }
+
+ return 0;
+}
+
+static int xsc_copy_from_rsp_msg(void *to, struct xsc_rsp_msg *from, int size)
+{
+ struct xsc_cmd_prot_block *block;
+ struct xsc_cmd_mailbox *next;
+ int copy;
+
+ if (!to || !from)
+ return -ENOMEM;
+
+ copy = min_t(int, size, sizeof(from->first.data));
+ memcpy(to, from->first.data, copy);
+ size -= copy;
+ to += copy;
+
+ next = from->next;
+ while (size) {
+ if (!next) {
+ /* this is a BUG */
+ return -ENOMEM;
+ }
+
+ copy = min_t(int, size, XSC_CMD_DATA_BLOCK_SIZE);
+ block = next->buf;
+ if (!block->owner_status)
+ pr_err("block ownership check failed\n");
+
+ memcpy(to, block->data, copy);
+ to += copy;
+ size -= copy;
+ next = next->next;
+ }
+
+ return 0;
+}
+
+static struct xsc_cmd_mailbox *alloc_cmd_box(struct xsc_core_device *xdev,
+ gfp_t flags)
+{
+ struct xsc_cmd_mailbox *mailbox;
+
+ mailbox = kmalloc(sizeof(*mailbox), flags);
+ if (!mailbox)
+ return ERR_PTR(-ENOMEM);
+
+ mailbox->buf = dma_pool_alloc(xdev->cmd.pool, flags,
+ &mailbox->dma);
+ if (!mailbox->buf) {
+ xsc_core_dbg(xdev, "failed allocation\n");
+ kfree(mailbox);
+ return ERR_PTR(-ENOMEM);
+ }
+ memset(mailbox->buf, 0, sizeof(struct xsc_cmd_prot_block));
+ mailbox->next = NULL;
+
+ return mailbox;
+}
+
+static void free_cmd_box(struct xsc_core_device *xdev,
+ struct xsc_cmd_mailbox *mailbox)
+{
+ dma_pool_free(xdev->cmd.pool, mailbox->buf, mailbox->dma);
+
+ kfree(mailbox);
+}
+
+static struct xsc_cmd_msg *xsc_alloc_cmd_msg(struct xsc_core_device *xdev,
+ gfp_t flags, int size)
+{
+ struct xsc_cmd_mailbox *tmp, *head = NULL;
+ struct xsc_cmd_prot_block *block;
+ struct xsc_cmd_msg *msg;
+ int blen;
+ int err;
+ int n;
+ int i;
+
+ msg = kzalloc(sizeof(*msg), GFP_KERNEL);
+ if (!msg)
+ return ERR_PTR(-ENOMEM);
+
+ blen = size - min_t(int, sizeof(msg->first.data), size);
+ n = (blen + XSC_CMD_DATA_BLOCK_SIZE - 1) / XSC_CMD_DATA_BLOCK_SIZE;
+
+ for (i = 0; i < n; i++) {
+ tmp = alloc_cmd_box(xdev, flags);
+ if (IS_ERR(tmp)) {
+ xsc_core_warn(xdev, "failed allocating block\n");
+ err = PTR_ERR(tmp);
+ goto err_alloc;
+ }
+
+ block = tmp->buf;
+ tmp->next = head;
+ block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0);
+ block->block_num = cpu_to_be32(n - i - 1);
+ head = tmp;
+ }
+ msg->next = head;
+ msg->len = size;
+ return msg;
+
+err_alloc:
+ while (head) {
+ tmp = head->next;
+ free_cmd_box(xdev, head);
+ head = tmp;
+ }
+ kfree(msg);
+
+ return ERR_PTR(err);
+}
+
+static void xsc_free_cmd_msg(struct xsc_core_device *xdev,
+ struct xsc_cmd_msg *msg)
+{
+ struct xsc_cmd_mailbox *head = msg->next;
+ struct xsc_cmd_mailbox *next;
+
+ while (head) {
+ next = head->next;
+ free_cmd_box(xdev, head);
+ head = next;
+ }
+ kfree(msg);
+}
+
+static struct xsc_rsp_msg *xsc_alloc_rsp_msg(struct xsc_core_device *xdev,
+ gfp_t flags, int size)
+{
+ struct xsc_cmd_mailbox *tmp, *head = NULL;
+ struct xsc_cmd_prot_block *block;
+ struct xsc_rsp_msg *msg;
+ int blen;
+ int err;
+ int n;
+ int i;
+
+ msg = kzalloc(sizeof(*msg), GFP_KERNEL);
+ if (!msg)
+ return ERR_PTR(-ENOMEM);
+
+ blen = size - min_t(int, sizeof(msg->first.data), size);
+ n = (blen + XSC_CMD_DATA_BLOCK_SIZE - 1) / XSC_CMD_DATA_BLOCK_SIZE;
+
+ for (i = 0; i < n; i++) {
+ tmp = alloc_cmd_box(xdev, flags);
+ if (IS_ERR(tmp)) {
+ xsc_core_warn(xdev, "failed allocating block\n");
+ err = PTR_ERR(tmp);
+ goto err_alloc;
+ }
+
+ block = tmp->buf;
+ tmp->next = head;
+ block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0);
+ block->block_num = cpu_to_be32(n - i - 1);
+ head = tmp;
+ }
+ msg->next = head;
+ msg->len = size;
+ return msg;
+
+err_alloc:
+ while (head) {
+ tmp = head->next;
+ free_cmd_box(xdev, head);
+ head = tmp;
+ }
+ kfree(msg);
+
+ return ERR_PTR(err);
+}
+
+static void xsc_free_rsp_msg(struct xsc_core_device *xdev,
+ struct xsc_rsp_msg *msg)
+{
+ struct xsc_cmd_mailbox *head = msg->next;
+ struct xsc_cmd_mailbox *next;
+
+ while (head) {
+ next = head->next;
+ free_cmd_box(xdev, head);
+ head = next;
+ }
+ kfree(msg);
+}
+
+static void set_wqname(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+
+ snprintf(cmd->wq_name, sizeof(cmd->wq_name), "xsc_cmd_%s",
+ dev_name(&xdev->pdev->dev));
+}
+
+void xsc_cmd_use_events(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ int i;
+
+ for (i = 0; i < cmd->max_reg_cmds; i++)
+ down(&cmd->sem);
+
+ flush_workqueue(cmd->wq);
+
+ cmd->mode = CMD_MODE_EVENTS;
+
+ while (cmd->cmd_pid != cmd->cq_cid)
+ msleep(20);
+ kthread_stop(cmd->cq_task);
+ cmd->cq_task = NULL;
+
+ for (i = 0; i < cmd->max_reg_cmds; i++)
+ up(&cmd->sem);
+}
+
+static int cmd_cq_polling(void *data);
+void xsc_cmd_use_polling(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ int i;
+
+ for (i = 0; i < cmd->max_reg_cmds; i++)
+ down(&cmd->sem);
+
+ flush_workqueue(cmd->wq);
+ cmd->mode = CMD_MODE_POLLING;
+ cmd->cq_task = kthread_create(cmd_cq_polling, (void *)xdev, "xsc_cmd_cq_polling");
+ if (cmd->cq_task)
+ wake_up_process(cmd->cq_task);
+
+ for (i = 0; i < cmd->max_reg_cmds; i++)
+ up(&cmd->sem);
+}
+
+static int status_to_err(u8 status)
+{
+ return status ? -1 : 0; /* TBD more meaningful codes */
+}
+
+static struct xsc_cmd_msg *alloc_msg(struct xsc_core_device *xdev, int in_size)
+{
+ struct xsc_cmd_msg *msg = ERR_PTR(-ENOMEM);
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct cache_ent *ent = NULL;
+
+ if (in_size > MED_LIST_SIZE && in_size <= LONG_LIST_SIZE)
+ ent = &cmd->cache.large;
+ else if (in_size > 16 && in_size <= MED_LIST_SIZE)
+ ent = &cmd->cache.med;
+
+ if (ent) {
+ spin_lock(&ent->lock);
+ if (!list_empty(&ent->head)) {
+ msg = list_entry(ent->head.next, typeof(*msg), list);
+ /* For cached lists, we must explicitly state what is
+ * the real size
+ */
+ msg->len = in_size;
+ list_del(&msg->list);
+ }
+ spin_unlock(&ent->lock);
+ }
+
+ if (IS_ERR(msg))
+ msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, in_size);
+
+ return msg;
+}
+
+static void free_msg(struct xsc_core_device *xdev, struct xsc_cmd_msg *msg)
+{
+ if (msg->cache) {
+ spin_lock(&msg->cache->lock);
+ list_add_tail(&msg->list, &msg->cache->head);
+ spin_unlock(&msg->cache->lock);
+ } else {
+ xsc_free_cmd_msg(xdev, msg);
+ }
+}
+
+static int dummy_work(struct xsc_core_device *xdev, struct xsc_cmd_msg *in,
+ struct xsc_rsp_msg *out, u16 dummy_cnt, u16 dummy_start_pid)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_cmd_work_ent **dummy_ent_arr;
+ struct xsc_cmd_layout *lay;
+ struct semaphore *sem;
+ int err = 0;
+ u16 i;
+ u16 free_cnt = 0;
+ u16 temp_pid = dummy_start_pid;
+
+ sem = &cmd->sem;
+
+ dummy_ent_arr = kcalloc(dummy_cnt, sizeof(struct xsc_cmd_work_ent *), GFP_KERNEL);
+ if (!dummy_ent_arr) {
+ err = -ENOMEM;
+ goto alloc_ent_arr_err;
+ }
+
+ for (i = 0; i < dummy_cnt; i++) {
+ dummy_ent_arr[i] = alloc_cmd(cmd, in, out);
+ if (IS_ERR(dummy_ent_arr[i])) {
+ xsc_core_err(xdev, "failed to alloc cmd buffer\n");
+ err = -ENOMEM;
+ free_cnt = i;
+ goto alloc_ent_err;
+ }
+
+ down(sem);
+
+ dummy_ent_arr[i]->idx = alloc_ent(cmd);
+ if (dummy_ent_arr[i]->idx < 0) {
+ xsc_core_err(xdev, "failed to allocate command entry\n");
+ err = -1;
+ free_cnt = i;
+ goto get_cmd_ent_idx_err;
+ }
+ dummy_ent_arr[i]->token = alloc_token(cmd);
+ cmd->ent_arr[dummy_ent_arr[i]->idx] = dummy_ent_arr[i];
+ init_completion(&dummy_ent_arr[i]->done);
+
+ lay = get_inst(cmd, temp_pid);
+ dummy_ent_arr[i]->lay = lay;
+ memset(lay, 0, sizeof(*lay));
+ memcpy(lay->in, dummy_ent_arr[i]->in->first.data, sizeof(dummy_ent_arr[i]->in));
+ lay->inlen = cpu_to_be32(dummy_ent_arr[i]->in->len);
+ lay->outlen = cpu_to_be32(dummy_ent_arr[i]->out->len);
+ lay->type = XSC_PCI_CMD_XPORT;
+ lay->token = dummy_ent_arr[i]->token;
+ lay->idx = dummy_ent_arr[i]->idx;
+ if (!cmd->checksum_disabled)
+ set_signature(dummy_ent_arr[i]);
+ else
+ lay->sig = 0xff;
+ temp_pid = (temp_pid + 1) % (1 << cmd->log_sz);
+ }
+
+ /* ring doorbell after the descriptor is valid */
+ wmb();
+ writel(cmd->cmd_pid, REG_ADDR(xdev, cmd->reg.req_pid_addr));
+ if (readl(REG_ADDR(xdev, cmd->reg.interrupt_stat_addr)) != 0)
+ writel(0xF, REG_ADDR(xdev, cmd->reg.interrupt_stat_addr));
+
+ xsc_core_dbg(xdev, "write 0x%x to command doorbell, idx %u ~ %u\n", cmd->cmd_pid,
+ dummy_ent_arr[0]->idx, dummy_ent_arr[dummy_cnt - 1]->idx);
+
+ if (wait_for_completion_timeout(&dummy_ent_arr[dummy_cnt - 1]->done,
+ msecs_to_jiffies(3000)) == 0) {
+ xsc_core_err(xdev, "dummy_cmd %d ent timeout, cmdq fail\n", dummy_cnt - 1);
+ err = -ETIMEDOUT;
+ } else {
+ xsc_core_dbg(xdev, "%d ent done\n", dummy_cnt);
+ }
+
+ for (i = 0; i < dummy_cnt; i++)
+ free_cmd(dummy_ent_arr[i]);
+
+ kfree(dummy_ent_arr);
+ return err;
+
+get_cmd_ent_idx_err:
+ free_cmd(dummy_ent_arr[free_cnt]);
+ up(sem);
+alloc_ent_err:
+ for (i = 0; i < free_cnt; i++) {
+ free_ent(cmd, dummy_ent_arr[i]->idx);
+ up(sem);
+ free_cmd(dummy_ent_arr[i]);
+ }
+ kfree(dummy_ent_arr);
+alloc_ent_arr_err:
+ return err;
+}
+
+static int xsc_dummy_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out,
+ int out_size, u16 dmmy_cnt, u16 dummy_start)
+{
+ struct xsc_cmd_msg *inb;
+ struct xsc_rsp_msg *outb;
+ int err;
+
+ inb = alloc_msg(xdev, in_size);
+ if (IS_ERR(inb)) {
+ err = PTR_ERR(inb);
+ return err;
+ }
+
+ err = xsc_copy_to_cmd_msg(inb, in, in_size);
+ if (err) {
+ xsc_core_warn(xdev, "err %d\n", err);
+ goto out_in;
+ }
+
+ outb = xsc_alloc_rsp_msg(xdev, GFP_KERNEL, out_size);
+ if (IS_ERR(outb)) {
+ err = PTR_ERR(outb);
+ goto out_in;
+ }
+
+ err = dummy_work(xdev, inb, outb, dmmy_cnt, dummy_start);
+
+ if (err)
+ goto out_out;
+
+ err = xsc_copy_from_rsp_msg(out, outb, out_size);
+
+out_out:
+ xsc_free_rsp_msg(xdev, outb);
+
+out_in:
+ free_msg(xdev, inb);
+ return err;
+}
+
+static int xsc_send_dummy_cmd(struct xsc_core_device *xdev, u16 gap, u16 dummy_start)
+{
+ struct xsc_cmd_dummy_mbox_out *out;
+ struct xsc_cmd_dummy_mbox_in in;
+ int err;
+
+ out = kzalloc(sizeof(*out), GFP_KERNEL);
+ if (!out) {
+ err = -ENOMEM;
+ goto no_mem_out;
+ }
+
+ memset(&in, 0, sizeof(in));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DUMMY);
+
+ err = xsc_dummy_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out), gap, dummy_start);
+ if (err)
+ goto out_out;
+
+ if (out->hdr.status) {
+ err = xsc_cmd_status_to_err(&out->hdr);
+ goto out_out;
+ }
+
+out_out:
+ kfree(out);
+no_mem_out:
+ return err;
+}
+
+static int request_pid_cid_mismatch_restore(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ u16 req_pid, req_cid;
+ u16 gap;
+
+ int err;
+
+ req_pid = readl(REG_ADDR(xdev, cmd->reg.req_pid_addr));
+ req_cid = readl(REG_ADDR(xdev, cmd->reg.req_cid_addr));
+ if (req_pid >= (1 << cmd->log_sz) || req_cid >= (1 << cmd->log_sz)) {
+ xsc_core_err(xdev, "req_pid %d, req_cid %d, out of normal range!!! max value is %d\n",
+ req_pid, req_cid, (1 << cmd->log_sz));
+ return -1;
+ }
+
+ if (req_pid == req_cid)
+ return 0;
+
+ gap = (req_pid > req_cid) ? (req_pid - req_cid) : ((1 << cmd->log_sz) + req_pid - req_cid);
+ xsc_core_info(xdev, "Cmdq req_pid %d, req_cid %d, send %d dummy cmds\n",
+ req_pid, req_cid, gap);
+
+ err = xsc_send_dummy_cmd(xdev, gap, req_cid);
+ if (err) {
+ xsc_core_err(xdev, "Send dummy cmd failed\n");
+ goto send_dummy_fail;
+ }
+
+send_dummy_fail:
+ return err;
+}
+
+static int _xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out,
+ int out_size)
+{
+ struct xsc_cmd_msg *inb;
+ struct xsc_rsp_msg *outb;
+ int err;
+ u8 status = 0;
+ struct xsc_cmd *cmd = &xdev->cmd;
+
+ if (cmd->cmd_status == XSC_CMD_STATUS_TIMEDOUT)
+ return -ETIMEDOUT;
+
+ inb = alloc_msg(xdev, in_size);
+ if (IS_ERR(inb)) {
+ err = PTR_ERR(inb);
+ return err;
+ }
+
+ err = xsc_copy_to_cmd_msg(inb, in, in_size);
+ if (err) {
+ xsc_core_warn(xdev, "err %d\n", err);
+ goto out_in;
+ }
+
+ outb = xsc_alloc_rsp_msg(xdev, GFP_KERNEL, out_size);
+ if (IS_ERR(outb)) {
+ err = PTR_ERR(outb);
+ goto out_in;
+ }
+
+ err = xsc_cmd_invoke(xdev, inb, outb, &status);
+ if (err)
+ goto out_out;
+
+ if (status) {
+ xsc_core_err(xdev, "opcode:%#x, err %d, status %d\n",
+ msg_to_opcode(inb), err, status);
+ err = status_to_err(status);
+ goto out_out;
+ }
+
+ err = xsc_copy_from_rsp_msg(out, outb, out_size);
+
+out_out:
+ xsc_free_rsp_msg(xdev, outb);
+
+out_in:
+ free_msg(xdev, inb);
+ return err;
+}
+
+int xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out,
+ int out_size)
+{
+ struct xsc_inbox_hdr *hdr = (struct xsc_inbox_hdr *)in;
+
+ hdr->ver = 0;
+ if (hdr->ver != 0) {
+ xsc_core_warn(xdev, "recv an unexpected cmd ver = %d, opcode = %d\n",
+ be16_to_cpu(hdr->ver), be16_to_cpu(hdr->opcode));
+ WARN_ON(hdr->ver != 0);
+ }
+
+ return _xsc_cmd_exec(xdev, in, in_size, out, out_size);
+}
+EXPORT_SYMBOL(xsc_cmd_exec);
+
+static void destroy_msg_cache(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_cmd_msg *msg;
+ struct xsc_cmd_msg *n;
+
+ list_for_each_entry_safe(msg, n, &cmd->cache.large.head, list) {
+ list_del(&msg->list);
+ xsc_free_cmd_msg(xdev, msg);
+ }
+
+ list_for_each_entry_safe(msg, n, &cmd->cache.med.head, list) {
+ list_del(&msg->list);
+ xsc_free_cmd_msg(xdev, msg);
+ }
+}
+
+static int create_msg_cache(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_cmd_msg *msg;
+ int err;
+ int i;
+
+ spin_lock_init(&cmd->cache.large.lock);
+ INIT_LIST_HEAD(&cmd->cache.large.head);
+ spin_lock_init(&cmd->cache.med.lock);
+ INIT_LIST_HEAD(&cmd->cache.med.head);
+
+ for (i = 0; i < NUM_LONG_LISTS; i++) {
+ msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, LONG_LIST_SIZE);
+ if (IS_ERR(msg)) {
+ err = PTR_ERR(msg);
+ goto ex_err;
+ }
+ msg->cache = &cmd->cache.large;
+ list_add_tail(&msg->list, &cmd->cache.large.head);
+ }
+
+ for (i = 0; i < NUM_MED_LISTS; i++) {
+ msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, MED_LIST_SIZE);
+ if (IS_ERR(msg)) {
+ err = PTR_ERR(msg);
+ goto ex_err;
+ }
+ msg->cache = &cmd->cache.med;
+ list_add_tail(&msg->list, &cmd->cache.med.head);
+ }
+
+ return 0;
+
+ex_err:
+ destroy_msg_cache(xdev);
+ return err;
+}
+
+static void xsc_cmd_comp_handler(struct xsc_core_device *xdev, u8 idx, struct xsc_rsp_layout *rsp)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_cmd_work_ent *ent;
+ struct xsc_inbox_hdr *hdr;
+
+ if (idx > cmd->max_reg_cmds || (cmd->bitmask & (1 << idx))) {
+ xsc_core_err(xdev, "idx[%d] exceed max cmds, or has no relative request.\n", idx);
+ return;
+ }
+ ent = cmd->ent_arr[idx];
+ ent->rsp_lay = rsp;
+ ktime_get_ts64(&ent->ts2);
+
+ memcpy(ent->out->first.data, ent->rsp_lay->out, sizeof(ent->rsp_lay->out));
+ dump_command(xdev, ent->out->next, ent, 0, ent->out->len);
+ if (!cmd->checksum_disabled)
+ ent->ret = verify_signature(ent);
+ else
+ ent->ret = 0;
+ ent->status = 0;
+
+ hdr = (struct xsc_inbox_hdr *)ent->in->first.data;
+ xsc_core_dbg(xdev, "delivery status:%s(%d), rsp status=%d, opcode %#x, idx:%d,%d, ret=%d\n",
+ deliv_status_to_str(ent->status), ent->status,
+ ((struct xsc_outbox_hdr *)ent->rsp_lay->out)->status,
+ __be16_to_cpu(hdr->opcode), idx, ent->lay->idx, ent->ret);
+ free_ent(cmd, ent->idx);
+ complete(&ent->done);
+ up(&cmd->sem);
+}
+
+static int cmd_cq_polling(void *data)
+{
+ struct xsc_core_device *xdev = data;
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_rsp_layout *rsp;
+ u32 cq_pid;
+
+ while (!kthread_should_stop()) {
+ if (need_resched())
+ schedule();
+ cq_pid = readl(REG_ADDR(xdev, cmd->reg.rsp_pid_addr));
+ if (cmd->cq_cid == cq_pid) {
+#ifdef COSIM
+ mdelay(1000);
+#else
+ mdelay(3);
+#endif
+ continue;
+ }
+
+ //get cqe
+ rsp = get_cq_inst(cmd, cmd->cq_cid);
+ if (!cmd->ownerbit_learned) {
+ cmd->ownerbit_learned = 1;
+ cmd->owner_bit = rsp->owner_bit;
+ }
+ if (cmd->owner_bit != rsp->owner_bit) {
+ //hw update cq doorbell but buf may not ready
+ xsc_core_err(xdev, "hw update cq doorbell but buf not ready %u %u\n",
+ cmd->cq_cid, cq_pid);
+ continue;
+ }
+
+ xsc_cmd_comp_handler(xdev, rsp->idx, rsp);
+
+ cmd->cq_cid = (cmd->cq_cid + 1) % (1 << cmd->log_sz);
+
+ writel(cmd->cq_cid, REG_ADDR(xdev, cmd->reg.rsp_cid_addr));
+ if (cmd->cq_cid == 0)
+ cmd->owner_bit = !cmd->owner_bit;
+ }
+ return 0;
+}
+
+int xsc_cmd_err_handler(struct xsc_core_device *xdev)
+{
+ union interrupt_stat {
+ struct {
+ u32 hw_read_req_err:1;
+ u32 hw_write_req_err:1;
+ u32 req_pid_err:1;
+ u32 rsp_cid_err:1;
+ };
+ u32 raw;
+ } stat;
+ int err = 0;
+ int retry = 0;
+
+ stat.raw = readl(REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr));
+ while (stat.raw != 0) {
+ err++;
+ if (stat.hw_read_req_err) {
+ retry = 1;
+ stat.hw_read_req_err = 0;
+ xsc_core_err(xdev, "hw report read req from host failed!\n");
+ } else if (stat.hw_write_req_err) {
+ retry = 1;
+ stat.hw_write_req_err = 0;
+ xsc_core_err(xdev, "hw report write req to fw failed!\n");
+ } else if (stat.req_pid_err) {
+ stat.req_pid_err = 0;
+ xsc_core_err(xdev, "hw report unexpected req pid!\n");
+ } else if (stat.rsp_cid_err) {
+ stat.rsp_cid_err = 0;
+ xsc_core_err(xdev, "hw report unexpected rsp cid!\n");
+ } else {
+ stat.raw = 0;
+ xsc_core_err(xdev, "ignore unknown interrupt!\n");
+ }
+ }
+
+ if (retry)
+ writel(xdev->cmd.cmd_pid, REG_ADDR(xdev, xdev->cmd.reg.req_pid_addr));
+
+ if (err)
+ writel(0xf, REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr));
+
+ return err;
+}
+
+void xsc_cmd_resp_handler(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+ struct xsc_rsp_layout *rsp;
+ u32 cq_pid;
+ const int budget = 32;
+ int count = 0;
+
+ while (count < budget) {
+ cq_pid = readl(REG_ADDR(xdev, cmd->reg.rsp_pid_addr));
+ if (cq_pid == cmd->cq_cid)
+ return;
+
+ rsp = get_cq_inst(cmd, cmd->cq_cid);
+ if (!cmd->ownerbit_learned) {
+ cmd->ownerbit_learned = 1;
+ cmd->owner_bit = rsp->owner_bit;
+ }
+ if (cmd->owner_bit != rsp->owner_bit) {
+ xsc_core_err(xdev, "hw update cq doorbell but buf not ready %u %u\n",
+ cmd->cq_cid, cq_pid);
+ return;
+ }
+
+ xsc_cmd_comp_handler(xdev, rsp->idx, rsp);
+
+ cmd->cq_cid = (cmd->cq_cid + 1) % (1 << cmd->log_sz);
+ writel(cmd->cq_cid, REG_ADDR(xdev, cmd->reg.rsp_cid_addr));
+ if (cmd->cq_cid == 0)
+ cmd->owner_bit = !cmd->owner_bit;
+
+ count++;
+ }
+}
+
+static void xsc_cmd_handle_rsp_before_reload
+(struct xsc_cmd *cmd, struct xsc_core_device *xdev)
+{
+ u32 rsp_pid, rsp_cid;
+
+ rsp_pid = readl(REG_ADDR(xdev, cmd->reg.rsp_pid_addr));
+ rsp_cid = readl(REG_ADDR(xdev, cmd->reg.rsp_cid_addr));
+ if (rsp_pid == rsp_cid)
+ return;
+
+ cmd->cq_cid = rsp_pid;
+
+ writel(cmd->cq_cid, REG_ADDR(xdev, cmd->reg.rsp_cid_addr));
+}
+
+int xsc_cmd_init(struct xsc_core_device *xdev)
+{
+ int size = sizeof(struct xsc_cmd_prot_block);
+ int align = roundup_pow_of_two(size);
+ struct xsc_cmd *cmd = &xdev->cmd;
+ u32 cmd_h, cmd_l;
+ u32 err_stat;
+ int err;
+ int i;
+
+ //sriov need adapt for this process.
+ //now there is 544 cmdq resource, soc using from id 514
+ cmd->reg.req_pid_addr = HIF_CMDQM_HOST_REQ_PID_MEM_ADDR;
+ cmd->reg.req_cid_addr = HIF_CMDQM_HOST_REQ_CID_MEM_ADDR;
+ cmd->reg.rsp_pid_addr = HIF_CMDQM_HOST_RSP_PID_MEM_ADDR;
+ cmd->reg.rsp_cid_addr = HIF_CMDQM_HOST_RSP_CID_MEM_ADDR;
+ cmd->reg.req_buf_h_addr = HIF_CMDQM_HOST_REQ_BUF_BASE_H_ADDR_MEM_ADDR;
+ cmd->reg.req_buf_l_addr = HIF_CMDQM_HOST_REQ_BUF_BASE_L_ADDR_MEM_ADDR;
+ cmd->reg.rsp_buf_h_addr = HIF_CMDQM_HOST_RSP_BUF_BASE_H_ADDR_MEM_ADDR;
+ cmd->reg.rsp_buf_l_addr = HIF_CMDQM_HOST_RSP_BUF_BASE_L_ADDR_MEM_ADDR;
+ cmd->reg.msix_vec_addr = HIF_CMDQM_VECTOR_ID_MEM_ADDR;
+ cmd->reg.element_sz_addr = HIF_CMDQM_Q_ELEMENT_SZ_REG_ADDR;
+ cmd->reg.q_depth_addr = HIF_CMDQM_HOST_Q_DEPTH_REG_ADDR;
+ cmd->reg.interrupt_stat_addr = HIF_CMDQM_HOST_VF_ERR_STS_MEM_ADDR;
+
+ cmd->pool = dma_pool_create("xsc_cmd", &xdev->pdev->dev, size, align, 0);
+
+ if (!cmd->pool)
+ return -ENOMEM;
+
+ cmd->cmd_buf = (void *)__get_free_pages(GFP_ATOMIC, 0);
+ if (!cmd->cmd_buf) {
+ err = -ENOMEM;
+ goto err_free_pool;
+ }
+ cmd->cq_buf = (void *)__get_free_pages(GFP_ATOMIC, 0);
+ if (!cmd->cq_buf) {
+ err = -ENOMEM;
+ goto err_free_cmd;
+ }
+
+ cmd->dma = dma_map_single(&xdev->pdev->dev, cmd->cmd_buf, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(&xdev->pdev->dev, cmd->dma)) {
+ err = -ENOMEM;
+ goto err_free;
+ }
+
+ cmd->cq_dma = dma_map_single(&xdev->pdev->dev, cmd->cq_buf, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(&xdev->pdev->dev, cmd->cq_dma)) {
+ err = -ENOMEM;
+ goto err_map_cmd;
+ }
+
+ cmd->cmd_pid = readl(REG_ADDR(xdev, cmd->reg.req_pid_addr));
+ cmd->cq_cid = readl(REG_ADDR(xdev, cmd->reg.rsp_cid_addr));
+ cmd->ownerbit_learned = 0;
+
+ xsc_cmd_handle_rsp_before_reload(cmd, xdev);
+
+#define ELEMENT_SIZE_LOG 6 //64B
+#define Q_DEPTH_LOG 5 //32
+
+ cmd->log_sz = Q_DEPTH_LOG;
+ cmd->log_stride = readl(REG_ADDR(xdev, cmd->reg.element_sz_addr));
+ writel(1 << cmd->log_sz, REG_ADDR(xdev, cmd->reg.q_depth_addr));
+ if (cmd->log_stride != ELEMENT_SIZE_LOG) {
+ dev_err(&xdev->pdev->dev, "firmware failed to init cmdq, log_stride=(%d, %d)\n",
+ cmd->log_stride, ELEMENT_SIZE_LOG);
+ err = -ENODEV;
+ goto err_map;
+ }
+
+ if (1 << cmd->log_sz > XSC_MAX_COMMANDS) {
+ dev_err(&xdev->pdev->dev, "firmware reports too many outstanding commands %d\n",
+ 1 << cmd->log_sz);
+ err = -EINVAL;
+ goto err_map;
+ }
+
+ if (cmd->log_sz + cmd->log_stride > PAGE_SHIFT) {
+ dev_err(&xdev->pdev->dev, "command queue size overflow\n");
+ err = -EINVAL;
+ goto err_map;
+ }
+
+ cmd->checksum_disabled = 1;
+ cmd->max_reg_cmds = (1 << cmd->log_sz) - 1;
+ cmd->bitmask = (1 << cmd->max_reg_cmds) - 1;
+
+ spin_lock_init(&cmd->alloc_lock);
+ spin_lock_init(&cmd->token_lock);
+ spin_lock_init(&cmd->doorbell_lock);
+ for (i = 0; i < ARRAY_SIZE(cmd->stats); i++)
+ spin_lock_init(&cmd->stats[i].lock);
+
+ sema_init(&cmd->sem, cmd->max_reg_cmds);
+
+ cmd_h = (u32)((u64)(cmd->dma) >> 32);
+ cmd_l = (u32)(cmd->dma);
+ if (cmd_l & 0xfff) {
+ dev_err(&xdev->pdev->dev, "invalid command queue address\n");
+ err = -ENOMEM;
+ goto err_map;
+ }
+
+ writel(cmd_h, REG_ADDR(xdev, cmd->reg.req_buf_h_addr));
+ writel(cmd_l, REG_ADDR(xdev, cmd->reg.req_buf_l_addr));
+
+ cmd_h = (u32)((u64)(cmd->cq_dma) >> 32);
+ cmd_l = (u32)(cmd->cq_dma);
+ if (cmd_l & 0xfff) {
+ dev_err(&xdev->pdev->dev, "invalid command queue address\n");
+ err = -ENOMEM;
+ goto err_map;
+ }
+ writel(cmd_h, REG_ADDR(xdev, cmd->reg.rsp_buf_h_addr));
+ writel(cmd_l, REG_ADDR(xdev, cmd->reg.rsp_buf_l_addr));
+
+ /* Make sure firmware sees the complete address before we proceed */
+ wmb();
+
+ xsc_core_dbg(xdev, "descriptor at dma 0x%llx 0x%llx\n",
+ (unsigned long long)(cmd->dma), (unsigned long long)(cmd->cq_dma));
+
+ cmd->mode = CMD_MODE_POLLING;
+ cmd->cmd_status = XSC_CMD_STATUS_NORMAL;
+
+ err = create_msg_cache(xdev);
+ if (err) {
+ dev_err(&xdev->pdev->dev, "failed to create command cache\n");
+ goto err_map;
+ }
+
+ set_wqname(xdev);
+ cmd->wq = create_singlethread_workqueue(cmd->wq_name);
+ if (!cmd->wq) {
+ dev_err(&xdev->pdev->dev, "failed to create command workqueue\n");
+ err = -ENOMEM;
+ goto err_cache;
+ }
+
+ cmd->cq_task = kthread_create(cmd_cq_polling, (void *)xdev, "xsc_cmd_cq_polling");
+ if (!cmd->cq_task) {
+ dev_err(&xdev->pdev->dev, "failed to create cq task\n");
+ err = -ENOMEM;
+ goto err_wq;
+ }
+ wake_up_process(cmd->cq_task);
+
+ err = request_pid_cid_mismatch_restore(xdev);
+ if (err) {
+ dev_err(&xdev->pdev->dev, "request pid,cid wrong, restore failed\n");
+ goto err_req_restore;
+ }
+
+ // clear abnormal state to avoid the impact of previous error
+ err_stat = readl(REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr));
+ if (err_stat) {
+ xsc_core_warn(xdev, "err_stat 0x%x when initializing, clear it\n", err_stat);
+ writel(0xf, REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr));
+ }
+
+ return 0;
+
+err_req_restore:
+ kthread_stop(cmd->cq_task);
+
+err_wq:
+ destroy_workqueue(cmd->wq);
+
+err_cache:
+ destroy_msg_cache(xdev);
+
+err_map:
+ dma_unmap_single(&xdev->pdev->dev, cmd->cq_dma, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+
+err_map_cmd:
+ dma_unmap_single(&xdev->pdev->dev, cmd->dma, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+err_free:
+ free_pages((unsigned long)cmd->cq_buf, 0);
+
+err_free_cmd:
+ free_pages((unsigned long)cmd->cmd_buf, 0);
+
+err_free_pool:
+ dma_pool_destroy(cmd->pool);
+
+ return err;
+}
+
+void xsc_cmd_cleanup(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd *cmd = &xdev->cmd;
+
+ destroy_workqueue(cmd->wq);
+ if (cmd->cq_task)
+ kthread_stop(cmd->cq_task);
+ destroy_msg_cache(xdev);
+ dma_unmap_single(&xdev->pdev->dev, cmd->dma, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ free_pages((unsigned long)cmd->cq_buf, 0);
+ dma_unmap_single(&xdev->pdev->dev, cmd->cq_dma, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ free_pages((unsigned long)cmd->cmd_buf, 0);
+ dma_pool_destroy(cmd->pool);
+}
+
+int xsc_cmd_version_check(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd_query_cmdq_ver_mbox_out *out;
+ struct xsc_cmd_query_cmdq_ver_mbox_in in;
+
+ int err;
+
+ out = kzalloc(sizeof(*out), GFP_KERNEL);
+ if (!out) {
+ err = -ENOMEM;
+ goto no_mem_out;
+ }
+
+ memset(&in, 0, sizeof(in));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_CMDQ_VERSION);
+
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out));
+ if (err)
+ goto out_out;
+
+ if (out->hdr.status) {
+ err = xsc_cmd_status_to_err(&out->hdr);
+ goto out_out;
+ }
+
+ if (be16_to_cpu(out->cmdq_ver) != CMDQ_VERSION) {
+ xsc_core_err(xdev, "cmdq version check failed, expecting version %d, actual version %d\n",
+ CMDQ_VERSION, be16_to_cpu(out->cmdq_ver));
+ err = -EINVAL;
+ goto out_out;
+ }
+ xdev->cmdq_ver = CMDQ_VERSION;
+
+out_out:
+ kfree(out);
+no_mem_out:
+ return err;
+}
+
+static const char *cmd_status_str(u8 status)
+{
+ switch (status) {
+ case XSC_CMD_STAT_OK:
+ return "OK";
+ case XSC_CMD_STAT_INT_ERR:
+ return "internal error";
+ case XSC_CMD_STAT_BAD_OP_ERR:
+ return "bad operation";
+ case XSC_CMD_STAT_BAD_PARAM_ERR:
+ return "bad parameter";
+ case XSC_CMD_STAT_BAD_SYS_STATE_ERR:
+ return "bad system state";
+ case XSC_CMD_STAT_BAD_RES_ERR:
+ return "bad resource";
+ case XSC_CMD_STAT_RES_BUSY:
+ return "resource busy";
+ case XSC_CMD_STAT_LIM_ERR:
+ return "limits exceeded";
+ case XSC_CMD_STAT_BAD_RES_STATE_ERR:
+ return "bad resource state";
+ case XSC_CMD_STAT_IX_ERR:
+ return "bad index";
+ case XSC_CMD_STAT_NO_RES_ERR:
+ return "no resources";
+ case XSC_CMD_STAT_BAD_INP_LEN_ERR:
+ return "bad input length";
+ case XSC_CMD_STAT_BAD_OUTP_LEN_ERR:
+ return "bad output length";
+ case XSC_CMD_STAT_BAD_QP_STATE_ERR:
+ return "bad QP state";
+ case XSC_CMD_STAT_BAD_PKT_ERR:
+ return "bad packet (discarded)";
+ case XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR:
+ return "bad size too many outstanding CQEs";
+ default:
+ return "unknown status";
+ }
+}
+
+int xsc_cmd_status_to_err(struct xsc_outbox_hdr *hdr)
+{
+ if (!hdr->status)
+ return 0;
+
+ pr_warn("command failed, status %s(0x%x)\n",
+ cmd_status_str(hdr->status), hdr->status);
+
+ switch (hdr->status) {
+ case XSC_CMD_STAT_OK: return 0;
+ case XSC_CMD_STAT_INT_ERR: return -EIO;
+ case XSC_CMD_STAT_BAD_OP_ERR: return -EOPNOTSUPP;
+ case XSC_CMD_STAT_BAD_PARAM_ERR: return -EINVAL;
+ case XSC_CMD_STAT_BAD_SYS_STATE_ERR: return -EIO;
+ case XSC_CMD_STAT_BAD_RES_ERR: return -EINVAL;
+ case XSC_CMD_STAT_RES_BUSY: return -EBUSY;
+ case XSC_CMD_STAT_LIM_ERR: return -EINVAL;
+ case XSC_CMD_STAT_BAD_RES_STATE_ERR: return -EINVAL;
+ case XSC_CMD_STAT_IX_ERR: return -EINVAL;
+ case XSC_CMD_STAT_NO_RES_ERR: return -EAGAIN;
+ case XSC_CMD_STAT_BAD_INP_LEN_ERR: return -EIO;
+ case XSC_CMD_STAT_BAD_OUTP_LEN_ERR: return -EIO;
+ case XSC_CMD_STAT_BAD_QP_STATE_ERR: return -EINVAL;
+ case XSC_CMD_STAT_BAD_PKT_ERR: return -EINVAL;
+ case XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR: return -EINVAL;
+ default: return -EIO;
+ }
+}
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index cbe0bfbd1..a7184f2fc 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -4,6 +4,12 @@
*/
#include "common/xsc_core.h"
+#include "common/xsc_driver.h"
+
+unsigned int xsc_debug_mask;
+module_param_named(debug_mask, xsc_debug_mask, uint, 0644);
+MODULE_PARM_DESC(debug_mask,
+ "debug mask: 1=dump cmd data, 2=dump cmd exec time, 3=both. Default=0");
unsigned int xsc_log_level = XSC_LOG_LEVEL_WARN;
module_param_named(log_level, xsc_log_level, uint, 0644);
@@ -192,6 +198,87 @@ static void xsc_core_dev_cleanup(struct xsc_core_device *xdev)
xsc_dev_res_cleanup(xdev);
}
+static int xsc_hw_setup(struct xsc_core_device *xdev)
+{
+ int err;
+
+ err = xsc_cmd_init(xdev);
+ if (err) {
+ xsc_core_err(xdev, "Failed initializing command interface, aborting\n");
+ goto out;
+ }
+
+ err = xsc_cmd_version_check(xdev);
+ if (err) {
+ xsc_core_err(xdev, "Failed to check cmdq version\n");
+ goto err_cmd_cleanup;
+ }
+
+ return 0;
+err_cmd_cleanup:
+ xsc_cmd_cleanup(xdev);
+out:
+ return err;
+}
+
+static int xsc_hw_cleanup(struct xsc_core_device *xdev)
+{
+ xsc_cmd_cleanup(xdev);
+
+ return 0;
+}
+
+static int xsc_load(struct xsc_core_device *xdev)
+{
+ int err = 0;
+
+ mutex_lock(&xdev->intf_state_mutex);
+ if (test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) {
+ xsc_core_warn(xdev, "interface is up, NOP\n");
+ goto out;
+ }
+
+ if (test_bit(XSC_INTERFACE_STATE_TEARDOWN, &xdev->intf_state)) {
+ xsc_core_warn(xdev, "device is being removed, stop load\n");
+ err = -ENODEV;
+ goto out;
+ }
+
+ err = xsc_hw_setup(xdev);
+ if (err) {
+ xsc_core_err(xdev, "xsc_hw_setup failed %d\n", err);
+ goto out;
+ }
+
+ set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
+ mutex_unlock(&xdev->intf_state_mutex);
+
+ return 0;
+out:
+ mutex_unlock(&xdev->intf_state_mutex);
+ return err;
+}
+
+static int xsc_unload(struct xsc_core_device *xdev)
+{
+ mutex_lock(&xdev->intf_state_mutex);
+ if (!test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) {
+ xsc_core_warn(xdev, "%s: interface is down, NOP\n",
+ __func__);
+ xsc_hw_cleanup(xdev);
+ goto out;
+ }
+
+ clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
+
+ xsc_hw_cleanup(xdev);
+
+out:
+ mutex_unlock(&xdev->intf_state_mutex);
+
+ return 0;
+}
+
static int xsc_pci_probe(struct pci_dev *pci_dev,
const struct pci_device_id *id)
{
@@ -218,7 +305,15 @@ static int xsc_pci_probe(struct pci_dev *pci_dev,
goto err_pci_fini;
}
+ err = xsc_load(xdev);
+ if (err) {
+ xsc_core_err(xdev, "xsc_load failed %d\n", err);
+ goto err_core_dev_cleanup;
+ }
+
return 0;
+err_core_dev_cleanup:
+ xsc_core_dev_cleanup(xdev);
err_pci_fini:
xsc_pci_fini(xdev);
err_unset_pci_drvdata:
@@ -232,6 +327,7 @@ static void xsc_pci_remove(struct pci_dev *pci_dev)
{
struct xsc_core_device *xdev = pci_get_drvdata(pci_dev);
+ xsc_unload(xdev);
xsc_core_dev_cleanup(xdev);
xsc_pci_fini(xdev);
pci_set_drvdata(pci_dev, NULL);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 03/16] net-next/yunsilicon: Add hardware setup APIs
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
2024-12-18 10:50 ` [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework Xin Tian
2024-12-18 10:50 ` [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 04/16] net-next/yunsilicon: Add qp and cq management Xin Tian
` (13 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add hardware setup APIs
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 156 ++++++++++
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +-
drivers/net/ethernet/yunsilicon/xsc/pci/hw.c | 269 ++++++++++++++++++
drivers/net/ethernet/yunsilicon/xsc/pci/hw.h | 18 ++
.../net/ethernet/yunsilicon/xsc/pci/main.c | 26 ++
5 files changed, 470 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 61ae5eafc..a8ac7878b 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -104,10 +104,147 @@ enum {
XSC_MAX_NAME_LEN = 32,
};
+enum {
+ XSC_MAX_PORTS = 2,
+};
+
+enum {
+ XSC_MAX_FW_PORTS = 1,
+};
+
+enum {
+ XSC_BF_REGS_PER_PAGE = 4,
+ XSC_MAX_UAR_PAGES = 1 << 8,
+ XSC_MAX_UUARS = XSC_MAX_UAR_PAGES * XSC_BF_REGS_PER_PAGE,
+};
+
struct xsc_dev_resource {
struct mutex alloc_mutex; /* protect buffer alocation according to numa node */
};
+struct xsc_reg_addr {
+ u64 tx_db;
+ u64 rx_db;
+ u64 complete_db;
+ u64 complete_reg;
+ u64 event_db;
+ u64 cpm_get_lock;
+ u64 cpm_put_lock;
+ u64 cpm_lock_avail;
+ u64 cpm_data_mem;
+ u64 cpm_cmd;
+ u64 cpm_addr;
+ u64 cpm_busy;
+};
+
+struct xsc_board_info {
+ u32 board_id;
+ char board_sn[XSC_BOARD_SN_LEN];
+ __be64 guid;
+ u8 guid_valid;
+ u8 hw_config_activated;
+};
+
+struct xsc_port_caps {
+ int gid_table_len;
+ int pkey_table_len;
+};
+
+struct xsc_caps {
+ u8 log_max_eq;
+ u8 log_max_cq;
+ u8 log_max_qp;
+ u8 log_max_mkey;
+ u8 log_max_pd;
+ u8 log_max_srq;
+ u8 log_max_msix;
+ u32 max_cqes;
+ u32 max_wqes;
+ u32 max_sq_desc_sz;
+ u32 max_rq_desc_sz;
+ u64 flags;
+ u16 stat_rate_support;
+ u32 log_max_msg;
+ u32 num_ports;
+ u32 max_ra_res_qp;
+ u32 max_ra_req_qp;
+ u32 max_srq_wqes;
+ u32 bf_reg_size;
+ u32 bf_regs_per_page;
+ struct xsc_port_caps port[XSC_MAX_PORTS];
+ u8 ext_port_cap[XSC_MAX_PORTS];
+ u32 reserved_lkey;
+ u8 local_ca_ack_delay;
+ u8 log_max_mcg;
+ u16 max_qp_mcg;
+ u32 min_page_sz;
+ u32 send_ds_num;
+ u32 send_wqe_shift;
+ u32 recv_ds_num;
+ u32 recv_wqe_shift;
+ u32 rx_pkt_len_max;
+
+ u32 msix_enable:1;
+ u32 port_type:1;
+ u32 embedded_cpu:1;
+ u32 eswitch_manager:1;
+ u32 ecpf_vport_exists:1;
+ u32 vport_group_manager:1;
+ u32 sf:1;
+ u32 wqe_inline_mode:3;
+ u32 raweth_qp_id_base:15;
+ u32 rsvd0:7;
+
+ u16 max_vfs;
+ u8 log_max_qp_depth;
+ u8 log_max_current_uc_list;
+ u8 log_max_current_mc_list;
+ u16 log_max_vlan_list;
+ u8 fdb_multi_path_to_table;
+ u8 log_esw_max_sched_depth;
+
+ u8 max_num_sf_partitions;
+ u8 log_max_esw_sf;
+ u16 sf_base_id;
+
+ u32 max_tc:8;
+ u32 ets:1;
+ u32 dcbx:1;
+ u32 dscp:1;
+ u32 sbcam_reg:1;
+ u32 qos:1;
+ u32 port_buf:1;
+ u32 rsvd1:2;
+ u32 raw_tpe_qp_num:16;
+ u32 max_num_eqs:8;
+ u32 mac_port:8;
+ u32 raweth_rss_qp_id_base:16;
+ u16 msix_base;
+ u16 msix_num;
+ u8 log_max_mtt;
+ u8 log_max_tso;
+ u32 hca_core_clock;
+ u32 max_rwq_indirection_tables;/*rss_caps*/
+ u32 max_rwq_indirection_table_size;/*rss_caps*/
+ u16 raweth_qp_id_end;
+ u32 qp_rate_limit_min;
+ u32 qp_rate_limit_max;
+ u32 hw_feature_flag;
+ u16 pf0_vf_funcid_base;
+ u16 pf0_vf_funcid_top;
+ u16 pf1_vf_funcid_base;
+ u16 pf1_vf_funcid_top;
+ u16 pcie0_pf_funcid_base;
+ u16 pcie0_pf_funcid_top;
+ u16 pcie1_pf_funcid_base;
+ u16 pcie1_pf_funcid_top;
+ u8 nif_port_num;
+ u8 pcie_host;
+ u8 mac_bit;
+ u16 funcid_to_logic_port;
+ u8 lag_logic_port_ofst;
+};
+
enum xsc_pci_state {
XSC_PCI_STATE_DISABLED,
XSC_PCI_STATE_ENABLED,
@@ -135,6 +272,9 @@ struct xsc_core_device {
void __iomem *bar;
int bar_num;
+ u8 mac_port;
+ u16 glb_func_id;
+
struct xsc_cmd cmd;
u16 cmdq_ver;
@@ -142,6 +282,22 @@ struct xsc_core_device {
enum xsc_pci_state pci_state;
struct mutex intf_state_mutex; /* protect intf_state */
unsigned long intf_state;
+
+ struct xsc_caps caps;
+ struct xsc_board_info *board_info;
+
+ struct xsc_reg_addr regs;
+ u32 chip_ver_h;
+ u32 chip_ver_m;
+ u32 chip_ver_l;
+ u32 hotfix_num;
+ u32 feature_flag;
+
+ u8 fw_version_major;
+ u8 fw_version_minor;
+ u16 fw_version_patch;
+ u32 fw_version_tweak;
+ u8 fw_version_extra_flag;
};
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 5e0f0a205..fea625d54 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o cmdq.o
+xsc_pci-y := main.o cmdq.o hw.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c
new file mode 100644
index 000000000..f2735783f
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c
@@ -0,0 +1,269 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include "common/xsc_driver.h"
+#include "hw.h"
+
+#define MAX_BOARD_NUM 32
+
+static struct xsc_board_info *board_info[MAX_BOARD_NUM];
+
+static struct xsc_board_info *xsc_get_board_info(char *board_sn)
+{
+ int i;
+
+ for (i = 0; i < MAX_BOARD_NUM; i++) {
+ if (!board_info[i])
+ continue;
+ if (!strncmp(board_info[i]->board_sn, board_sn, XSC_BOARD_SN_LEN))
+ return board_info[i];
+ }
+ return NULL;
+}
+
+static struct xsc_board_info *xsc_alloc_board_info(void)
+{
+ int i;
+
+ for (i = 0; i < MAX_BOARD_NUM; i++) {
+ if (!board_info[i])
+ break;
+ }
+ if (i == MAX_BOARD_NUM)
+ return NULL;
+ board_info[i] = vmalloc(sizeof(*board_info[i]));
+ if (!board_info[i])
+ return NULL;
+ memset(board_info[i], 0, sizeof(*board_info[i]));
+ board_info[i]->board_id = i;
+ return board_info[i];
+}
+
+void xsc_free_board_info(void)
+{
+ int i;
+
+ for (i = 0; i < MAX_BOARD_NUM; i++)
+ vfree(board_info[i]);
+}
+
+int xsc_cmd_query_hca_cap(struct xsc_core_device *xdev,
+ struct xsc_caps *caps)
+{
+ struct xsc_cmd_query_hca_cap_mbox_out *out;
+ struct xsc_cmd_query_hca_cap_mbox_in in;
+ int err;
+ u16 t16;
+ struct xsc_board_info *board_info = NULL;
+
+ out = kzalloc(sizeof(*out), GFP_KERNEL);
+ if (!out)
+ return -ENOMEM;
+
+ memset(&in, 0, sizeof(in));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_HCA_CAP);
+ in.cpu_num = cpu_to_be16(num_online_cpus());
+
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out));
+ if (err)
+ goto out_out;
+
+ if (out->hdr.status) {
+ err = xsc_cmd_status_to_err(&out->hdr);
+ goto out_out;
+ }
+
+ xdev->glb_func_id = be32_to_cpu(out->hca_cap.glb_func_id);
+ caps->pcie0_pf_funcid_base = be16_to_cpu(out->hca_cap.pcie0_pf_funcid_base);
+ caps->pcie0_pf_funcid_top = be16_to_cpu(out->hca_cap.pcie0_pf_funcid_top);
+ caps->pcie1_pf_funcid_base = be16_to_cpu(out->hca_cap.pcie1_pf_funcid_base);
+ caps->pcie1_pf_funcid_top = be16_to_cpu(out->hca_cap.pcie1_pf_funcid_top);
+ caps->funcid_to_logic_port = be16_to_cpu(out->hca_cap.funcid_to_logic_port);
+ xsc_core_dbg(xdev, "pcie0_pf_range=(%4u, %4u), pcie1_pf_range=(%4u, %4u)\n",
+ caps->pcie0_pf_funcid_base, caps->pcie0_pf_funcid_top,
+ caps->pcie1_pf_funcid_base, caps->pcie1_pf_funcid_top);
+
+ caps->pcie_host = out->hca_cap.pcie_host;
+ caps->nif_port_num = out->hca_cap.nif_port_num;
+ caps->hw_feature_flag = be32_to_cpu(out->hca_cap.hw_feature_flag);
+
+ caps->raweth_qp_id_base = be16_to_cpu(out->hca_cap.raweth_qp_id_base);
+ caps->raweth_qp_id_end = be16_to_cpu(out->hca_cap.raweth_qp_id_end);
+ caps->raweth_rss_qp_id_base = be16_to_cpu(out->hca_cap.raweth_rss_qp_id_base);
+ caps->raw_tpe_qp_num = be16_to_cpu(out->hca_cap.raw_tpe_qp_num);
+ caps->max_cqes = 1 << out->hca_cap.log_max_cq_sz;
+ caps->max_wqes = 1 << out->hca_cap.log_max_qp_sz;
+ caps->max_sq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_sq);
+ caps->max_rq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_rq);
+ caps->flags = be64_to_cpu(out->hca_cap.flags);
+ caps->stat_rate_support = be16_to_cpu(out->hca_cap.stat_rate_support);
+ caps->log_max_msg = out->hca_cap.log_max_msg & 0x1f;
+ caps->num_ports = out->hca_cap.num_ports & 0xf;
+ caps->log_max_cq = out->hca_cap.log_max_cq & 0x1f;
+ caps->log_max_eq = out->hca_cap.log_max_eq & 0xf;
+ caps->log_max_msix = out->hca_cap.log_max_msix & 0xf;
+ caps->mac_port = out->hca_cap.mac_port & 0xff;
+ xdev->mac_port = caps->mac_port;
+ if (caps->num_ports > XSC_MAX_FW_PORTS) {
+ xsc_core_err(xdev, "device has %d ports while the driver supports max %d ports\n",
+ caps->num_ports, XSC_MAX_FW_PORTS);
+ err = -EINVAL;
+ goto out_out;
+ }
+ caps->send_ds_num = out->hca_cap.send_seg_num;
+ caps->send_wqe_shift = out->hca_cap.send_wqe_shift;
+ caps->recv_ds_num = out->hca_cap.recv_seg_num;
+ caps->recv_wqe_shift = out->hca_cap.recv_wqe_shift;
+
+ caps->embedded_cpu = 0;
+ caps->ecpf_vport_exists = 0;
+ caps->log_max_current_uc_list = 0;
+ caps->log_max_current_mc_list = 0;
+ caps->log_max_vlan_list = 8;
+ caps->log_max_qp = out->hca_cap.log_max_qp & 0x1f;
+ caps->log_max_mkey = out->hca_cap.log_max_mkey & 0x3f;
+ caps->log_max_pd = out->hca_cap.log_max_pd & 0x1f;
+ caps->log_max_srq = out->hca_cap.log_max_srqs & 0x1f;
+ caps->local_ca_ack_delay = out->hca_cap.local_ca_ack_delay & 0x1f;
+ caps->log_max_mcg = out->hca_cap.log_max_mcg;
+ caps->log_max_mtt = out->hca_cap.log_max_mtt;
+ caps->log_max_tso = out->hca_cap.log_max_tso;
+ caps->hca_core_clock = be32_to_cpu(out->hca_cap.hca_core_clock);
+ caps->max_rwq_indirection_tables =
+ be32_to_cpu(out->hca_cap.max_rwq_indirection_tables);
+ caps->max_rwq_indirection_table_size =
+ be32_to_cpu(out->hca_cap.max_rwq_indirection_table_size);
+ caps->max_qp_mcg = be16_to_cpu(out->hca_cap.max_qp_mcg);
+ caps->max_ra_res_qp = 1 << (out->hca_cap.log_max_ra_res_qp & 0x3f);
+ caps->max_ra_req_qp = 1 << (out->hca_cap.log_max_ra_req_qp & 0x3f);
+ caps->max_srq_wqes = 1 << out->hca_cap.log_max_srq_sz;
+ caps->rx_pkt_len_max = be32_to_cpu(out->hca_cap.rx_pkt_len_max);
+ caps->max_vfs = be16_to_cpu(out->hca_cap.max_vfs);
+ caps->qp_rate_limit_min = be32_to_cpu(out->hca_cap.qp_rate_limit_min);
+ caps->qp_rate_limit_max = be32_to_cpu(out->hca_cap.qp_rate_limit_max);
+
+ caps->msix_enable = 1;
+ caps->msix_base = be16_to_cpu(out->hca_cap.msix_base);
+ caps->msix_num = be16_to_cpu(out->hca_cap.msix_num);
+
+ t16 = be16_to_cpu(out->hca_cap.bf_log_bf_reg_size);
+ if (t16 & 0x8000) {
+ caps->bf_reg_size = 1 << (t16 & 0x1f);
+ caps->bf_regs_per_page = XSC_BF_REGS_PER_PAGE;
+ } else {
+ caps->bf_reg_size = 0;
+ caps->bf_regs_per_page = 0;
+ }
+ caps->min_page_sz = ~(u32)((1 << PAGE_SHIFT) - 1);
+
+ caps->dcbx = 1;
+ caps->qos = 1;
+ caps->ets = 1;
+ caps->dscp = 1;
+ caps->max_tc = out->hca_cap.max_tc;
+ caps->log_max_qp_depth = out->hca_cap.log_max_qp_depth & 0xff;
+ caps->mac_bit = out->hca_cap.mac_bit;
+ caps->lag_logic_port_ofst = out->hca_cap.lag_logic_port_ofst;
+
+ xdev->chip_ver_h = be32_to_cpu(out->hca_cap.chip_ver_h);
+ xdev->chip_ver_m = be32_to_cpu(out->hca_cap.chip_ver_m);
+ xdev->chip_ver_l = be32_to_cpu(out->hca_cap.chip_ver_l);
+ xdev->hotfix_num = be32_to_cpu(out->hca_cap.hotfix_num);
+ xdev->feature_flag = be32_to_cpu(out->hca_cap.feature_flag);
+
+ board_info = xsc_get_board_info(out->hca_cap.board_sn);
+ if (!board_info) {
+ board_info = xsc_alloc_board_info();
+ if (!board_info)
+ return -ENOMEM;
+
+ memcpy(board_info->board_sn, out->hca_cap.board_sn, sizeof(out->hca_cap.board_sn));
+ }
+ xdev->board_info = board_info;
+
+ xdev->regs.tx_db = be64_to_cpu(out->hca_cap.tx_db);
+ xdev->regs.rx_db = be64_to_cpu(out->hca_cap.rx_db);
+ xdev->regs.complete_db = be64_to_cpu(out->hca_cap.complete_db);
+ xdev->regs.complete_reg = be64_to_cpu(out->hca_cap.complete_reg);
+ xdev->regs.event_db = be64_to_cpu(out->hca_cap.event_db);
+
+ xdev->fw_version_major = out->hca_cap.fw_ver.fw_version_major;
+ xdev->fw_version_minor = out->hca_cap.fw_ver.fw_version_minor;
+ xdev->fw_version_patch = be16_to_cpu(out->hca_cap.fw_ver.fw_version_patch);
+ xdev->fw_version_tweak = be32_to_cpu(out->hca_cap.fw_ver.fw_version_tweak);
+ xdev->fw_version_extra_flag = out->hca_cap.fw_ver.fw_version_extra_flag;
+out_out:
+ kfree(out);
+
+ return err;
+}
+
+static int xsc_cmd_query_guid(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd_query_guid_mbox_in in;
+ struct xsc_cmd_query_guid_mbox_out out;
+ int err;
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_GUID);
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err)
+ return err;
+
+ if (out.hdr.status)
+ return xsc_cmd_status_to_err(&out.hdr);
+ xdev->board_info->guid = out.guid;
+ xdev->board_info->guid_valid = 1;
+ return 0;
+}
+
+int xsc_query_guid(struct xsc_core_device *xdev)
+{
+ if (xdev->board_info->guid_valid)
+ return 0;
+
+ return xsc_cmd_query_guid(xdev);
+}
+
+static int xsc_cmd_activate_hw_config(struct xsc_core_device *xdev)
+{
+ struct xsc_cmd_activate_hw_config_mbox_in in;
+ struct xsc_cmd_activate_hw_config_mbox_out out;
+ int err = 0;
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ACTIVATE_HW_CONFIG);
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err)
+ return err;
+ if (out.hdr.status)
+ return xsc_cmd_status_to_err(&out.hdr);
+ xdev->board_info->hw_config_activated = 1;
+ return 0;
+}
+
+int xsc_activate_hw_config(struct xsc_core_device *xdev)
+{
+ if (xdev->board_info->hw_config_activated)
+ return 0;
+
+ return xsc_cmd_activate_hw_config(xdev);
+}
+
+int xsc_reset_function_resource(struct xsc_core_device *xdev)
+{
+ struct xsc_function_reset_mbox_in in;
+ struct xsc_function_reset_mbox_out out;
+ int err;
+
+ memset(&in, 0, sizeof(in));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_FUNCTION_RESET);
+ in.glb_func_id = cpu_to_be16(xdev->glb_func_id);
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status)
+ return -EINVAL;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h
new file mode 100644
index 000000000..a41a1e8d8
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef BOARD_H
+#define BOARD_H
+
+#include "common/xsc_core.h"
+
+void xsc_free_board_info(void);
+int xsc_cmd_query_hca_cap(struct xsc_core_device *xdev,
+ struct xsc_caps *caps);
+int xsc_query_guid(struct xsc_core_device *xdev);
+int xsc_activate_hw_config(struct xsc_core_device *xdev);
+int xsc_reset_function_resource(struct xsc_core_device *xdev);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index a7184f2fc..f07b6dec8 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -5,6 +5,7 @@
#include "common/xsc_core.h"
#include "common/xsc_driver.h"
+#include "hw.h"
unsigned int xsc_debug_mask;
module_param_named(debug_mask, xsc_debug_mask, uint, 0644);
@@ -214,6 +215,30 @@ static int xsc_hw_setup(struct xsc_core_device *xdev)
goto err_cmd_cleanup;
}
+ err = xsc_cmd_query_hca_cap(xdev, &xdev->caps);
+ if (err) {
+ xsc_core_err(xdev, "Failed to query hca, err=%d\n", err);
+ goto err_cmd_cleanup;
+ }
+
+ err = xsc_query_guid(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to query guid, err=%d\n", err);
+ goto err_cmd_cleanup;
+ }
+
+ err = xsc_activate_hw_config(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to activate hw config, err=%d\n", err);
+ goto err_cmd_cleanup;
+ }
+
+ err = xsc_reset_function_resource(xdev);
+ if (err) {
+ xsc_core_err(xdev, "Failed to reset function resource\n");
+ goto err_cmd_cleanup;
+ }
+
return 0;
err_cmd_cleanup:
xsc_cmd_cleanup(xdev);
@@ -359,6 +384,7 @@ static int __init xsc_init(void)
static void __exit xsc_fini(void)
{
pci_unregister_driver(&xsc_pci_driver);
+ xsc_free_board_info();
}
module_init(xsc_init);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 04/16] net-next/yunsilicon: Add qp and cq management
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (2 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 03/16] net-next/yunsilicon: Add hardware setup APIs Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 05/16] net-next/yunsilicon: Add eq and alloc Xin Tian
` (12 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add qp(queue pair) and cq(completion queue) resource management APIs
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 172 +++++++++++++++++-
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +-
drivers/net/ethernet/yunsilicon/xsc/pci/cq.c | 39 ++++
drivers/net/ethernet/yunsilicon/xsc/pci/cq.h | 14 ++
.../net/ethernet/yunsilicon/xsc/pci/main.c | 5 +
drivers/net/ethernet/yunsilicon/xsc/pci/qp.c | 79 ++++++++
drivers/net/ethernet/yunsilicon/xsc/pci/qp.h | 15 ++
7 files changed, 325 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index a8ac7878b..a68ae708d 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -104,6 +104,10 @@ enum {
XSC_MAX_NAME_LEN = 32,
};
+enum {
+ XSC_MAX_EQ_NAME = 20
+};
+
enum {
XSC_MAX_PORTS = 2,
};
@@ -118,8 +122,160 @@ enum {
XSC_MAX_UUARS = XSC_MAX_UAR_PAGES * XSC_BF_REGS_PER_PAGE,
};
+// alloc.c
+struct xsc_buf_list {
+ void *buf;
+ dma_addr_t map;
+};
+
+struct xsc_buf {
+ struct xsc_buf_list direct;
+ struct xsc_buf_list *page_list;
+ int nbufs;
+ int npages;
+ int page_shift;
+ int size;
+};
+
+struct xsc_frag_buf {
+ struct xsc_buf_list *frags;
+ int npages;
+ int size;
+ u8 page_shift;
+};
+
+struct xsc_frag_buf_ctrl {
+ struct xsc_buf_list *frags;
+ u32 sz_m1;
+ u16 frag_sz_m1;
+ u16 strides_offset;
+ u8 log_sz;
+ u8 log_stride;
+ u8 log_frag_strides;
+};
+
+// xsc_core_qp
+struct xsc_send_wqe_ctrl_seg {
+ __le32 msg_opcode:8;
+ __le32 with_immdt:1;
+ __le32 csum_en:2;
+ __le32 ds_data_num:5;
+ __le32 wqe_id:16;
+ __le32 msg_len;
+ union {
+ __le32 opcode_data;
+ struct {
+ u8 has_pph:1;
+ u8 so_type:1;
+ __le16 so_data_size:14;
+ u8:8;
+ u8 so_hdr_len:8;
+ };
+ struct {
+ __le16 desc_id;
+ __le16 is_last_wqe:1;
+ __le16 dst_qp_id:15;
+ };
+ };
+ __le32 se:1;
+ __le32 ce:1;
+ __le32:30;
+};
+
+struct xsc_wqe_data_seg {
+ union {
+ __le32 in_line:1;
+ struct {
+ __le32:1;
+ __le32 seg_len:31;
+ __le32 mkey;
+ __le64 va;
+ };
+ struct {
+ __le32:1;
+ __le32 len:7;
+ u8 in_line_data[15];
+ };
+ };
+};
+
+struct xsc_core_qp {
+ void (*event)(struct xsc_core_qp *qp, int type);
+ int qpn;
+ atomic_t refcount;
+ struct completion free;
+ int pid;
+ u16 qp_type;
+ u16 eth_queue_type;
+ u16 qp_type_internal;
+ u16 grp_id;
+ u8 mac_id;
+};
+
+struct xsc_qp_table {
+ spinlock_t lock; /* protect radix tree */
+ struct radix_tree_root tree;
+};
+
+// cq
+enum xsc_event {
+ XSC_EVENT_TYPE_COMP = 0x0,
+ XSC_EVENT_TYPE_COMM_EST = 0x02,//mad
+ XSC_EVENT_TYPE_CQ_ERROR = 0x04,
+ XSC_EVENT_TYPE_WQ_CATAS_ERROR = 0x05,
+ XSC_EVENT_TYPE_INTERNAL_ERROR = 0x08,
+ XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR = 0x10,//IBV_EVENT_QP_REQ_ERR
+ XSC_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,//IBV_EVENT_QP_ACCESS_ERR
+};
+
+struct xsc_core_cq {
+ u32 cqn;
+ int cqe_sz;
+ u64 arm_db;
+ u64 ci_db;
+ struct xsc_core_device *xdev;
+ atomic_t refcount;
+ struct completion free;
+ unsigned int vector;
+ int irqn;
+ u16 dim_us;
+ u16 dim_pkts;
+ void (*comp)(struct xsc_core_cq *cq);
+ void (*event)(struct xsc_core_cq *cq, enum xsc_event);
+ u32 cons_index;
+ unsigned int arm_sn;
+ int pid;
+ u32 reg_next_cid;
+ u32 reg_done_pid;
+ struct xsc_eq *eq;
+};
+
+struct xsc_cq_table {
+ spinlock_t lock; /* protect radix tree */
+ struct radix_tree_root tree;
+};
+
+struct xsc_eq {
+ struct xsc_core_device *dev;
+ struct xsc_cq_table cq_table;
+ u32 doorbell;//offset from bar0/2 space start
+ u32 cons_index;
+ struct xsc_buf buf;
+ int size;
+ unsigned int irqn;
+ u16 eqn;
+ int nent;
+ cpumask_var_t mask;
+ char name[XSC_MAX_EQ_NAME];
+ struct list_head list;
+ int index;
+};
+
struct xsc_dev_resource {
- struct mutex alloc_mutex; /* protect buffer alocation according to numa node */
+ struct xsc_qp_table qp_table;
+ struct xsc_cq_table cq_table;
+
+ struct mutex alloc_mutex; /* protect buffer alocation according to numa node */
};
struct xsc_reg_addr {
@@ -300,4 +456,18 @@ struct xsc_core_device {
u8 fw_version_extra_flag;
};
+int xsc_core_create_resource_common(struct xsc_core_device *xdev,
+ struct xsc_core_qp *qp);
+void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
+ struct xsc_core_qp *qp);
+
+static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
+{
+ if (likely(BITS_PER_LONG == 64 || buf->nbufs == 1))
+ return buf->direct.buf + offset;
+ else
+ return buf->page_list[offset >> PAGE_SHIFT].buf +
+ (offset & (PAGE_SIZE - 1));
+}
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index fea625d54..9a4a6e02d 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,4 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o cmdq.o hw.o
+xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
new file mode 100644
index 000000000..ed0423ef2
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "common/xsc_core.h"
+#include "cq.h"
+
+void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type)
+{
+ struct xsc_cq_table *table = &xdev->dev_res->cq_table;
+ struct xsc_core_cq *cq;
+
+ spin_lock(&table->lock);
+
+ cq = radix_tree_lookup(&table->tree, cqn);
+ if (cq)
+ atomic_inc(&cq->refcount);
+
+ spin_unlock(&table->lock);
+
+ if (!cq) {
+ xsc_core_warn(xdev, "Async event for bogus CQ 0x%x\n", cqn);
+ return;
+ }
+
+ cq->event(cq, event_type);
+
+ if (atomic_dec_and_test(&cq->refcount))
+ complete(&cq->free);
+}
+
+void xsc_init_cq_table(struct xsc_core_device *xdev)
+{
+ struct xsc_cq_table *table = &xdev->dev_res->cq_table;
+
+ spin_lock_init(&table->lock);
+ INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h
new file mode 100644
index 000000000..b223769e9
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_CQ_H
+#define XSC_CQ_H
+
+#include "common/xsc_core.h"
+
+void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type);
+void xsc_init_cq_table(struct xsc_core_device *xdev);
+
+#endif /* XSC_CQ_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index f07b6dec8..45f700129 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -6,6 +6,8 @@
#include "common/xsc_core.h"
#include "common/xsc_driver.h"
#include "hw.h"
+#include "qp.h"
+#include "cq.h"
unsigned int xsc_debug_mask;
module_param_named(debug_mask, xsc_debug_mask, uint, 0644);
@@ -239,6 +241,9 @@ static int xsc_hw_setup(struct xsc_core_device *xdev)
goto err_cmd_cleanup;
}
+ xsc_init_cq_table(xdev);
+ xsc_init_qp_table(xdev);
+
return 0;
err_cmd_cleanup:
xsc_cmd_cleanup(xdev);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
new file mode 100644
index 000000000..de58a21b5
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
@@ -0,0 +1,79 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include <linux/gfp.h>
+#include <linux/time.h>
+#include <linux/export.h>
+#include <linux/kthread.h>
+#include "common/xsc_core.h"
+#include "qp.h"
+
+int xsc_core_create_resource_common(struct xsc_core_device *xdev,
+ struct xsc_core_qp *qp)
+{
+ struct xsc_qp_table *table = &xdev->dev_res->qp_table;
+ int err;
+
+ spin_lock_irq(&table->lock);
+ err = radix_tree_insert(&table->tree, qp->qpn, qp);
+ spin_unlock_irq(&table->lock);
+ if (err)
+ return err;
+
+ atomic_set(&qp->refcount, 1);
+ init_completion(&qp->free);
+ qp->pid = current->pid;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xsc_core_create_resource_common);
+
+void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
+ struct xsc_core_qp *qp)
+{
+ struct xsc_qp_table *table = &xdev->dev_res->qp_table;
+ unsigned long flags;
+
+ spin_lock_irqsave(&table->lock, flags);
+ radix_tree_delete(&table->tree, qp->qpn);
+ spin_unlock_irqrestore(&table->lock, flags);
+
+ if (atomic_dec_and_test(&qp->refcount))
+ complete(&qp->free);
+ wait_for_completion(&qp->free);
+}
+EXPORT_SYMBOL_GPL(xsc_core_destroy_resource_common);
+
+void xsc_qp_event(struct xsc_core_device *xdev, u32 qpn, int event_type)
+{
+ struct xsc_qp_table *table = &xdev->dev_res->qp_table;
+ struct xsc_core_qp *qp;
+
+ spin_lock(&table->lock);
+
+ qp = radix_tree_lookup(&table->tree, qpn);
+ if (qp)
+ atomic_inc(&qp->refcount);
+
+ spin_unlock(&table->lock);
+
+ if (!qp) {
+ xsc_core_warn(xdev, "Async event for bogus QP 0x%x\n", qpn);
+ return;
+ }
+
+ qp->event(qp, event_type);
+
+ if (atomic_dec_and_test(&qp->refcount))
+ complete(&qp->free);
+}
+
+void xsc_init_qp_table(struct xsc_core_device *xdev)
+{
+ struct xsc_qp_table *table = &xdev->dev_res->qp_table;
+
+ spin_lock_init(&table->lock);
+ INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h
new file mode 100644
index 000000000..baba6c7de
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_QP_H
+#define XSC_QP_H
+
+#include "common/xsc_core.h"
+
+void xsc_init_qp_table(struct xsc_core_device *xdev);
+void xsc_cleanup_qp_table(struct xsc_core_device *xdev);
+void xsc_qp_event(struct xsc_core_device *xdev, u32 qpn, int event_type);
+
+#endif /* XSC_QP_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 05/16] net-next/yunsilicon: Add eq and alloc
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (3 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 04/16] net-next/yunsilicon: Add qp and cq management Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 06/16] net-next/yunsilicon: Add pci irq Xin Tian
` (11 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add eq management and buffer alloc apis
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 40 +-
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +-
.../net/ethernet/yunsilicon/xsc/pci/alloc.c | 129 +++++++
.../net/ethernet/yunsilicon/xsc/pci/alloc.h | 16 +
drivers/net/ethernet/yunsilicon/xsc/pci/eq.c | 345 ++++++++++++++++++
drivers/net/ethernet/yunsilicon/xsc/pci/eq.h | 46 +++
.../net/ethernet/yunsilicon/xsc/pci/main.c | 2 +
7 files changed, 578 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index a68ae708d..09199d5a1 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -30,6 +30,10 @@ extern unsigned int xsc_log_level;
#define XSC_MV_HOST_VF_DEV_ID 0x1152
#define XSC_MV_SOC_PF_DEV_ID 0x1153
+#define PAGE_SHIFT_4K 12
+#define PAGE_SIZE_4K (_AC(1, UL) << PAGE_SHIFT_4K)
+#define PAGE_MASK_4K (~(PAGE_SIZE_4K - 1))
+
enum {
XSC_LOG_LEVEL_DBG = 0,
XSC_LOG_LEVEL_INFO = 1,
@@ -108,6 +112,10 @@ enum {
XSC_MAX_EQ_NAME = 20
};
+enum {
+ XSC_MAX_IRQ_NAME = 32
+};
+
enum {
XSC_MAX_PORTS = 2,
};
@@ -154,7 +162,7 @@ struct xsc_frag_buf_ctrl {
u8 log_frag_strides;
};
-// xsc_core_qp
+// qp
struct xsc_send_wqe_ctrl_seg {
__le32 msg_opcode:8;
__le32 with_immdt:1;
@@ -255,6 +263,7 @@ struct xsc_cq_table {
struct radix_tree_root tree;
};
+// eq
struct xsc_eq {
struct xsc_core_device *dev;
struct xsc_cq_table cq_table;
@@ -271,9 +280,30 @@ struct xsc_eq {
int index;
};
+struct xsc_eq_table {
+ void __iomem *update_ci;
+ void __iomem *update_arm_ci;
+ struct list_head comp_eqs_list;
+ struct xsc_eq pages_eq;
+ struct xsc_eq async_eq;
+ struct xsc_eq cmd_eq;
+ int num_comp_vectors;
+ int eq_vec_comp_base;
+ /* protect EQs list
+ */
+ spinlock_t lock;
+};
+
+struct xsc_irq_info {
+ cpumask_var_t mask;
+ char name[XSC_MAX_IRQ_NAME];
+};
+
struct xsc_dev_resource {
struct xsc_qp_table qp_table;
struct xsc_cq_table cq_table;
+ struct xsc_eq_table eq_table;
+ struct xsc_irq_info *irq_info;
struct mutex alloc_mutex; /* protect buffer alocation according to numa node */
};
@@ -431,6 +461,8 @@ struct xsc_core_device {
u8 mac_port;
u16 glb_func_id;
+ u16 msix_vec_base;
+
struct xsc_cmd cmd;
u16 cmdq_ver;
@@ -460,6 +492,7 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev,
struct xsc_core_qp *qp);
void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
struct xsc_core_qp *qp);
+struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
{
@@ -470,4 +503,9 @@ static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
(offset & (PAGE_SIZE - 1));
}
+static inline bool xsc_fw_is_available(struct xsc_core_device *xdev)
+{
+ return xdev->cmd.cmd_status == XSC_CMD_STATUS_NORMAL;
+}
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 9a4a6e02d..667319958 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,5 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o
-
+xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
new file mode 100644
index 000000000..f95b7f660
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/export.h>
+#include <linux/bitmap.h>
+#include <linux/dma-mapping.h>
+#include <linux/vmalloc.h>
+#include "alloc.h"
+
+/* Handling for queue buffers -- we allocate a bunch of memory and
+ * register it in a memory region at HCA virtual address 0. If the
+ * requested size is > max_direct, we split the allocation into
+ * multiple pages, so we don't require too much contiguous memory.
+ */
+
+int xsc_buf_alloc(struct xsc_core_device *xdev, int size, int max_direct,
+ struct xsc_buf *buf)
+{
+ dma_addr_t t;
+
+ buf->size = size;
+ if (size <= max_direct) {
+ buf->nbufs = 1;
+ buf->npages = 1;
+ buf->page_shift = get_order(size) + PAGE_SHIFT;
+ buf->direct.buf = dma_alloc_coherent(&xdev->pdev->dev,
+ size, &t, GFP_KERNEL | __GFP_ZERO);
+ if (!buf->direct.buf)
+ return -ENOMEM;
+
+ buf->direct.map = t;
+
+ while (t & ((1 << buf->page_shift) - 1)) {
+ --buf->page_shift;
+ buf->npages *= 2;
+ }
+ } else {
+ int i;
+
+ buf->direct.buf = NULL;
+ buf->nbufs = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+ buf->npages = buf->nbufs;
+ buf->page_shift = PAGE_SHIFT;
+ buf->page_list = kcalloc(buf->nbufs, sizeof(*buf->page_list),
+ GFP_KERNEL);
+ if (!buf->page_list)
+ return -ENOMEM;
+
+ for (i = 0; i < buf->nbufs; i++) {
+ buf->page_list[i].buf =
+ dma_alloc_coherent(&xdev->pdev->dev, PAGE_SIZE,
+ &t, GFP_KERNEL | __GFP_ZERO);
+ if (!buf->page_list[i].buf)
+ goto err_free;
+
+ buf->page_list[i].map = t;
+ }
+
+ if (BITS_PER_LONG == 64) {
+ struct page **pages;
+
+ pages = kmalloc_array(buf->nbufs, sizeof(*pages), GFP_KERNEL);
+ if (!pages)
+ goto err_free;
+ for (i = 0; i < buf->nbufs; i++) {
+ if (is_vmalloc_addr(buf->page_list[i].buf))
+ pages[i] = vmalloc_to_page(buf->page_list[i].buf);
+ else
+ pages[i] = virt_to_page(buf->page_list[i].buf);
+ }
+ buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
+ kfree(pages);
+ if (!buf->direct.buf)
+ goto err_free;
+ }
+ }
+
+ return 0;
+
+err_free:
+ xsc_buf_free(xdev, buf);
+
+ return -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(xsc_buf_alloc);
+
+void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf)
+{
+ int i;
+
+ if (buf->nbufs == 1) {
+ dma_free_coherent(&xdev->pdev->dev, buf->size, buf->direct.buf,
+ buf->direct.map);
+ } else {
+ if (BITS_PER_LONG == 64 && buf->direct.buf)
+ vunmap(buf->direct.buf);
+
+ for (i = 0; i < buf->nbufs; i++)
+ if (buf->page_list[i].buf)
+ dma_free_coherent(&xdev->pdev->dev, PAGE_SIZE,
+ buf->page_list[i].buf,
+ buf->page_list[i].map);
+ kfree(buf->page_list);
+ }
+}
+EXPORT_SYMBOL_GPL(xsc_buf_free);
+
+void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages)
+{
+ u64 addr;
+ int i;
+ int shift = PAGE_SHIFT - PAGE_SHIFT_4K;
+ int mask = (1 << shift) - 1;
+
+ for (i = 0; i < npages; i++) {
+ if (buf->nbufs == 1)
+ addr = buf->direct.map + (i << PAGE_SHIFT_4K);
+ else
+ addr = buf->page_list[i >> shift].map + ((i & mask) << PAGE_SHIFT_4K);
+
+ pas[i] = cpu_to_be64(addr);
+ }
+}
+EXPORT_SYMBOL_GPL(xsc_fill_page_array);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
new file mode 100644
index 000000000..a53f68eb1
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_ALLOC_H
+#define XSC_ALLOC_H
+
+#include "common/xsc_core.h"
+
+int xsc_buf_alloc(struct xsc_core_device *xdev, int size, int max_direct,
+ struct xsc_buf *buf);
+void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf);
+void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c
new file mode 100644
index 000000000..c60a1323e
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c
@@ -0,0 +1,345 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include "common/xsc_driver.h"
+#include "common/xsc_core.h"
+#include "qp.h"
+#include "alloc.h"
+#include "eq.h"
+
+enum {
+ XSC_EQE_SIZE = sizeof(struct xsc_eqe),
+ XSC_EQE_OWNER_INIT_VAL = 0x1,
+};
+
+enum {
+ XSC_NUM_SPARE_EQE = 0x80,
+ XSC_NUM_ASYNC_EQE = 0x100,
+};
+
+static int xsc_cmd_destroy_eq(struct xsc_core_device *xdev, u32 eqn)
+{
+ struct xsc_destroy_eq_mbox_in in;
+ struct xsc_destroy_eq_mbox_out out;
+ int err;
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_EQ);
+ in.eqn = cpu_to_be32(eqn);
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (!err)
+ goto ex;
+
+ if (out.hdr.status)
+ err = xsc_cmd_status_to_err(&out.hdr);
+
+ex:
+ return err;
+}
+
+static struct xsc_eqe *get_eqe(struct xsc_eq *eq, u32 entry)
+{
+ return xsc_buf_offset(&eq->buf, entry * XSC_EQE_SIZE);
+}
+
+static struct xsc_eqe *next_eqe_sw(struct xsc_eq *eq)
+{
+ struct xsc_eqe *eqe = get_eqe(eq, eq->cons_index & (eq->nent - 1));
+
+ return ((eqe->owner & 1) ^ !!(eq->cons_index & eq->nent)) ? NULL : eqe;
+}
+
+static void eq_update_ci(struct xsc_eq *eq, int arm)
+{
+ union xsc_eq_doorbell db;
+
+ db.val = 0;
+ db.arm = !!arm;
+ db.eq_next_cid = eq->cons_index;
+ db.eq_id = eq->eqn;
+ writel(db.val, REG_ADDR(eq->dev, eq->doorbell));
+ /* We still want ordering, just not swabbing, so add a barrier */
+ mb();
+}
+
+static void xsc_cq_completion(struct xsc_core_device *xdev, u32 cqn)
+{
+ struct xsc_core_cq *cq;
+ struct xsc_cq_table *table = &xdev->dev_res->cq_table;
+
+ rcu_read_lock();
+ cq = radix_tree_lookup(&table->tree, cqn);
+ if (likely(cq))
+ atomic_inc(&cq->refcount);
+ rcu_read_unlock();
+
+ if (!cq) {
+ xsc_core_err(xdev, "Completion event for bogus CQ, cqn=%d\n", cqn);
+ return;
+ }
+
+ ++cq->arm_sn;
+
+ if (!cq->comp)
+ xsc_core_err(xdev, "cq->comp is NULL\n");
+ else
+ cq->comp(cq);
+
+ if (atomic_dec_and_test(&cq->refcount))
+ complete(&cq->free);
+}
+
+static void xsc_eq_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type)
+{
+ struct xsc_core_cq *cq;
+ struct xsc_cq_table *table = &xdev->dev_res->cq_table;
+
+ spin_lock(&table->lock);
+ cq = radix_tree_lookup(&table->tree, cqn);
+ if (likely(cq))
+ atomic_inc(&cq->refcount);
+ spin_unlock(&table->lock);
+
+ if (unlikely(!cq)) {
+ xsc_core_err(xdev, "Async event for bogus CQ, cqn=%d\n", cqn);
+ return;
+ }
+
+ cq->event(cq, event_type);
+
+ if (atomic_dec_and_test(&cq->refcount))
+ complete(&cq->free);
+}
+
+static int xsc_eq_int(struct xsc_core_device *xdev, struct xsc_eq *eq)
+{
+ struct xsc_eqe *eqe;
+ int eqes_found = 0;
+ int set_ci = 0;
+ u32 cqn, qpn, queue_id;
+
+ while ((eqe = next_eqe_sw(eq))) {
+ /* Make sure we read EQ entry contents after we've
+ * checked the ownership bit.
+ */
+ rmb();
+ switch (eqe->type) {
+ case XSC_EVENT_TYPE_COMP:
+ case XSC_EVENT_TYPE_INTERNAL_ERROR:
+ /* eqe is changing */
+ queue_id = eqe->queue_id;
+ cqn = queue_id;
+ xsc_cq_completion(xdev, cqn);
+ break;
+
+ case XSC_EVENT_TYPE_CQ_ERROR:
+ queue_id = eqe->queue_id;
+ cqn = queue_id;
+ xsc_eq_cq_event(xdev, cqn, eqe->type);
+ break;
+ case XSC_EVENT_TYPE_WQ_CATAS_ERROR:
+ case XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
+ case XSC_EVENT_TYPE_WQ_ACCESS_ERROR:
+ queue_id = eqe->queue_id;
+ qpn = queue_id;
+ xsc_qp_event(xdev, qpn, eqe->type);
+ break;
+ default:
+ xsc_core_warn(xdev, "Unhandle event %d on EQ %d\n", eqe->type, eq->eqn);
+ break;
+ }
+
+ ++eq->cons_index;
+ eqes_found = 1;
+ ++set_ci;
+
+ /* The HCA will think the queue has overflowed if we
+ * don't tell it we've been processing events. We
+ * create our EQs with XSC_NUM_SPARE_EQE extra
+ * entries, so we must update our consumer index at
+ * least that often.
+ */
+ if (unlikely(set_ci >= XSC_NUM_SPARE_EQE)) {
+ xsc_core_dbg(xdev, "EQ%d eq_num=%d qpn=%d, db_noarm\n",
+ eq->eqn, set_ci, eqe->queue_id);
+ eq_update_ci(eq, 0);
+ set_ci = 0;
+ }
+ }
+
+ eq_update_ci(eq, 1);
+#ifdef XSC_DEBUG
+ xsc_core_dbg(xdev, "EQ%d eq_num=%d qpn=%d, db_arm=%d\n",
+ eq->eqn, set_ci, (eqe ? eqe->queue_id : 0), eq_db_arm);
+#endif
+
+ return eqes_found;
+}
+
+static irqreturn_t xsc_msix_handler(int irq, void *eq_ptr)
+{
+ struct xsc_eq *eq = eq_ptr;
+ struct xsc_core_device *xdev = eq->dev;
+#ifdef XSC_DEBUG
+ xsc_core_dbg(xdev, "EQ %d hint irq: %d\n", eq->eqn, irq);
+#endif
+ xsc_eq_int(xdev, eq);
+
+ /* MSI-X vectors always belong to us */
+ return IRQ_HANDLED;
+}
+
+static void init_eq_buf(struct xsc_eq *eq)
+{
+ struct xsc_eqe *eqe;
+ int i;
+
+ for (i = 0; i < eq->nent; i++) {
+ eqe = get_eqe(eq, i);
+ eqe->owner = XSC_EQE_OWNER_INIT_VAL;
+ }
+}
+
+int xsc_create_map_eq(struct xsc_core_device *xdev, struct xsc_eq *eq, u8 vecidx,
+ int nent, const char *name)
+{
+ struct xsc_dev_resource *dev_res = xdev->dev_res;
+ u16 msix_vec_offset = xdev->msix_vec_base + vecidx;
+ struct xsc_create_eq_mbox_in *in;
+ struct xsc_create_eq_mbox_out out;
+ int err;
+ int inlen;
+ int hw_npages;
+
+ eq->nent = roundup_pow_of_two(roundup(nent, XSC_NUM_SPARE_EQE));
+ err = xsc_buf_alloc(xdev, eq->nent * XSC_EQE_SIZE, PAGE_SIZE, &eq->buf);
+ if (err)
+ return err;
+
+ init_eq_buf(eq);
+
+ hw_npages = DIV_ROUND_UP(eq->nent * XSC_EQE_SIZE, PAGE_SIZE_4K);
+ inlen = sizeof(*in) + sizeof(in->pas[0]) * hw_npages;
+ in = kvzalloc(inlen, GFP_KERNEL);
+ if (!in) {
+ err = -ENOMEM;
+ goto err_buf;
+ }
+ memset(&out, 0, sizeof(out));
+
+ xsc_fill_page_array(&eq->buf, in->pas, hw_npages);
+
+ in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_EQ);
+ in->ctx.log_eq_sz = ilog2(eq->nent);
+ in->ctx.vecidx = cpu_to_be16(msix_vec_offset);
+ in->ctx.pa_num = cpu_to_be16(hw_npages);
+ in->ctx.glb_func_id = cpu_to_be16(xdev->glb_func_id);
+ in->ctx.is_async_eq = (vecidx == XSC_EQ_VEC_ASYNC ? 1 : 0);
+
+ err = xsc_cmd_exec(xdev, in, inlen, &out, sizeof(out));
+ if (err)
+ goto err_in;
+
+ if (out.hdr.status) {
+ err = -ENOSPC;
+ goto err_in;
+ }
+
+ snprintf(dev_res->irq_info[vecidx].name, XSC_MAX_IRQ_NAME, "%s@pci:%s",
+ name, pci_name(xdev->pdev));
+
+ eq->eqn = be32_to_cpu(out.eqn);
+ eq->irqn = pci_irq_vector(xdev->pdev, vecidx);
+ eq->dev = xdev;
+ eq->doorbell = xdev->regs.event_db;
+ eq->index = vecidx;
+ xsc_core_dbg(xdev, "msix%d request vector%d eq%d irq%d\n",
+ vecidx, msix_vec_offset, eq->eqn, eq->irqn);
+
+ err = request_irq(eq->irqn, xsc_msix_handler, 0,
+ dev_res->irq_info[vecidx].name, eq);
+ if (err)
+ goto err_eq;
+
+ /* EQs are created in ARMED state
+ */
+ eq_update_ci(eq, 1);
+ kvfree(in);
+ return 0;
+
+err_eq:
+ xsc_cmd_destroy_eq(xdev, eq->eqn);
+
+err_in:
+ kvfree(in);
+
+err_buf:
+ xsc_buf_free(xdev, &eq->buf);
+ return err;
+}
+
+int xsc_destroy_unmap_eq(struct xsc_core_device *xdev, struct xsc_eq *eq)
+{
+ int err;
+
+ if (!xsc_fw_is_available(xdev))
+ return 0;
+
+ free_irq(eq->irqn, eq);
+ err = xsc_cmd_destroy_eq(xdev, eq->eqn);
+ if (err)
+ xsc_core_warn(xdev, "failed to destroy a previously created eq: eqn %d\n",
+ eq->eqn);
+ xsc_buf_free(xdev, &eq->buf);
+
+ return err;
+}
+
+void xsc_eq_init(struct xsc_core_device *xdev)
+{
+ spin_lock_init(&xdev->dev_res->eq_table.lock);
+}
+
+int xsc_start_eqs(struct xsc_core_device *xdev)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ int err;
+
+ err = xsc_create_map_eq(xdev, &table->async_eq, XSC_EQ_VEC_ASYNC,
+ XSC_NUM_ASYNC_EQE, "xsc_async_eq");
+ if (err)
+ xsc_core_warn(xdev, "failed to create async EQ %d\n", err);
+
+ return err;
+}
+
+void xsc_stop_eqs(struct xsc_core_device *xdev)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+
+ xsc_destroy_unmap_eq(xdev, &table->async_eq);
+}
+
+struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ struct xsc_eq *eq, *n;
+ struct xsc_eq *eq_ret = NULL;
+
+ spin_lock(&table->lock);
+ list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ if (eq->index == i) {
+ eq_ret = eq;
+ break;
+ }
+ }
+ spin_unlock(&table->lock);
+
+ return eq_ret;
+}
+EXPORT_SYMBOL(xsc_core_eq_get);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h
new file mode 100644
index 000000000..56ff2e9df
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_EQ_H
+#define XSC_EQ_H
+
+#include "common/xsc_core.h"
+
+enum {
+ XSC_EQ_VEC_ASYNC = 0,
+ XSC_VEC_CMD = 1,
+ XSC_VEC_CMD_EVENT = 2,
+ XSC_DMA_READ_DONE_VEC = 3,
+ XSC_EQ_VEC_COMP_BASE,
+};
+
+struct xsc_eqe {
+ u8 type;
+ u8 sub_type;
+ __le16 queue_id:15;
+ u8 rsv1:1;
+ u8 err_code;
+ u8 rsvd[2];
+ u8 rsv2:7;
+ u8 owner:1;
+};
+
+union xsc_eq_doorbell {
+ struct{
+ u32 eq_next_cid : 11;
+ u32 eq_id : 11;
+ u32 arm : 1;
+ };
+ u32 val;
+};
+
+int xsc_create_map_eq(struct xsc_core_device *xdev, struct xsc_eq *eq, u8 vecidx,
+ int nent, const char *name);
+int xsc_destroy_unmap_eq(struct xsc_core_device *xdev, struct xsc_eq *eq);
+void xsc_eq_init(struct xsc_core_device *xdev);
+int xsc_start_eqs(struct xsc_core_device *xdev);
+void xsc_stop_eqs(struct xsc_core_device *xdev);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index 45f700129..b89c13002 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -8,6 +8,7 @@
#include "hw.h"
#include "qp.h"
#include "cq.h"
+#include "eq.h"
unsigned int xsc_debug_mask;
module_param_named(debug_mask, xsc_debug_mask, uint, 0644);
@@ -243,6 +244,7 @@ static int xsc_hw_setup(struct xsc_core_device *xdev)
xsc_init_cq_table(xdev);
xsc_init_qp_table(xdev);
+ xsc_eq_init(xdev);
return 0;
err_cmd_cleanup:
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 06/16] net-next/yunsilicon: Add pci irq
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (4 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 05/16] net-next/yunsilicon: Add eq and alloc Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 07/16] net-next/yunsilicon: Device and interface management Xin Tian
` (10 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Implement interrupt management and event handling
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 6 +
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +-
.../net/ethernet/yunsilicon/xsc/pci/main.c | 11 +-
.../net/ethernet/yunsilicon/xsc/pci/pci_irq.c | 427 ++++++++++++++++++
.../net/ethernet/yunsilicon/xsc/pci/pci_irq.h | 14 +
5 files changed, 458 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 09199d5a1..3541be638 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -453,8 +453,11 @@ struct xsc_core_device {
struct pci_dev *pdev;
struct device *device;
struct xsc_priv priv;
+ void *eth_priv;
struct xsc_dev_resource *dev_res;
+ void (*event_handler)(void *adapter);
+
void __iomem *bar;
int bar_num;
@@ -486,6 +489,7 @@ struct xsc_core_device {
u16 fw_version_patch;
u32 fw_version_tweak;
u8 fw_version_extra_flag;
+ cpumask_var_t xps_cpumask;
};
int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -493,6 +497,8 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev,
void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
struct xsc_core_qp *qp);
struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
+int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn,
+ unsigned int *irqn);
static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
{
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 667319958..3525d1c74 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o
+xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index b89c13002..ea9bda0e6 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -9,6 +9,7 @@
#include "qp.h"
#include "cq.h"
#include "eq.h"
+#include "pci_irq.h"
unsigned int xsc_debug_mask;
module_param_named(debug_mask, xsc_debug_mask, uint, 0644);
@@ -282,10 +283,18 @@ static int xsc_load(struct xsc_core_device *xdev)
goto out;
}
+ err = xsc_irq_eq_create(xdev);
+ if (err) {
+ xsc_core_err(xdev, "xsc_irq_eq_create failed %d\n", err);
+ goto err_irq_eq_create;
+ }
+
set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
mutex_unlock(&xdev->intf_state_mutex);
return 0;
+err_irq_eq_create:
+ xsc_hw_cleanup(xdev);
out:
mutex_unlock(&xdev->intf_state_mutex);
return err;
@@ -302,7 +311,7 @@ static int xsc_unload(struct xsc_core_device *xdev)
}
clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
-
+ xsc_irq_eq_destroy(xdev);
xsc_hw_cleanup(xdev);
out:
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c
new file mode 100644
index 000000000..110a33c2b
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c
@@ -0,0 +1,427 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include <linux/irqdomain.h>
+#include <linux/msi.h>
+#include <linux/interrupt.h>
+#include <linux/notifier.h>
+#include <linux/module.h>
+#ifdef CONFIG_RFS_ACCEL
+#include <linux/cpu_rmap.h>
+#endif
+#include "common/xsc_driver.h"
+#include "common/xsc_core.h"
+#include "eq.h"
+#include "pci_irq.h"
+
+enum {
+ XSC_COMP_EQ_SIZE = 1024,
+};
+
+enum xsc_eq_type {
+ XSC_EQ_TYPE_COMP,
+ XSC_EQ_TYPE_ASYNC,
+#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
+ XSC_EQ_TYPE_PF,
+#endif
+};
+
+struct xsc_irq {
+ struct atomic_notifier_head nh;
+ cpumask_var_t mask;
+ char name[XSC_MAX_IRQ_NAME];
+};
+
+struct xsc_irq_table {
+ struct xsc_irq *irq;
+ int nvec;
+#ifdef CONFIG_RFS_ACCEL
+ struct cpu_rmap *rmap;
+#endif
+};
+
+struct xsc_msix_resource *g_msix_xres;
+
+static void xsc_free_irq(struct xsc_core_device *xdev, unsigned int vector)
+{
+ unsigned int irqn = 0;
+
+ irqn = pci_irq_vector(xdev->pdev, vector);
+ disable_irq(irqn);
+
+ if (xsc_fw_is_available(xdev))
+ free_irq(irqn, xdev);
+}
+
+static int set_comp_irq_affinity_hint(struct xsc_core_device *xdev, int i)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ int vecidx = table->eq_vec_comp_base + i;
+ struct xsc_eq *eq = xsc_core_eq_get(xdev, i);
+ unsigned int irqn;
+ int ret;
+
+ irqn = pci_irq_vector(xdev->pdev, vecidx);
+ if (!zalloc_cpumask_var(&eq->mask, GFP_KERNEL)) {
+ xsc_core_err(xdev, "zalloc_cpumask_var rx cpumask failed");
+ return -ENOMEM;
+ }
+
+ if (!zalloc_cpumask_var(&xdev->xps_cpumask, GFP_KERNEL)) {
+ xsc_core_err(xdev, "zalloc_cpumask_var tx cpumask failed");
+ return -ENOMEM;
+ }
+
+ cpumask_set_cpu(cpumask_local_spread(i, xdev->priv.numa_node),
+ xdev->xps_cpumask);
+ ret = irq_set_affinity_hint(irqn, eq->mask);
+
+ return ret;
+}
+
+static void clear_comp_irq_affinity_hint(struct xsc_core_device *xdev, int i)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ int vecidx = table->eq_vec_comp_base + i;
+ struct xsc_eq *eq = xsc_core_eq_get(xdev, i);
+ int irqn;
+
+ irqn = pci_irq_vector(xdev->pdev, vecidx);
+ irq_set_affinity_hint(irqn, NULL);
+ free_cpumask_var(eq->mask);
+}
+
+static int set_comp_irq_affinity_hints(struct xsc_core_device *xdev)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ int nvec = table->num_comp_vectors;
+ int err;
+ int i;
+
+ for (i = 0; i < nvec; i++) {
+ err = set_comp_irq_affinity_hint(xdev, i);
+ if (err)
+ goto err_out;
+ }
+
+ return 0;
+
+err_out:
+ for (i--; i >= 0; i--)
+ clear_comp_irq_affinity_hint(xdev, i);
+ free_cpumask_var(xdev->xps_cpumask);
+
+ return err;
+}
+
+static void clear_comp_irq_affinity_hints(struct xsc_core_device *xdev)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ int nvec = table->num_comp_vectors;
+ int i;
+
+ for (i = 0; i < nvec; i++)
+ clear_comp_irq_affinity_hint(xdev, i);
+ free_cpumask_var(xdev->xps_cpumask);
+}
+
+static int xsc_alloc_irq_vectors(struct xsc_core_device *xdev)
+{
+ struct xsc_dev_resource *dev_res = xdev->dev_res;
+ struct xsc_eq_table *table = &dev_res->eq_table;
+ int nvec = xdev->caps.msix_num;
+ int nvec_base;
+ int err;
+
+ nvec_base = XSC_EQ_VEC_COMP_BASE;
+ if (nvec <= nvec_base) {
+ xsc_core_warn(xdev, "failed to alloc irq vector(%d)\n", nvec);
+ return -ENOMEM;
+ }
+
+ dev_res->irq_info = kcalloc(nvec, sizeof(*dev_res->irq_info), GFP_KERNEL);
+ if (!dev_res->irq_info)
+ return -ENOMEM;
+
+ nvec = pci_alloc_irq_vectors(xdev->pdev, nvec_base + 1, nvec, PCI_IRQ_MSIX);
+ if (nvec < 0) {
+ err = nvec;
+ goto err_free_irq_info;
+ }
+
+ table->eq_vec_comp_base = nvec_base;
+ table->num_comp_vectors = nvec - nvec_base;
+ xdev->msix_vec_base = xdev->caps.msix_base;
+ xsc_core_info(xdev,
+ "alloc msix_vec_num=%d, comp_num=%d, max_msix_num=%d, msix_vec_base=%d\n",
+ nvec, table->num_comp_vectors, xdev->caps.msix_num, xdev->msix_vec_base);
+
+ return 0;
+
+err_free_irq_info:
+ pci_free_irq_vectors(xdev->pdev);
+ kfree(dev_res->irq_info);
+ return err;
+}
+
+static void xsc_free_irq_vectors(struct xsc_core_device *xdev)
+{
+ struct xsc_dev_resource *dev_res = xdev->dev_res;
+
+ if (!xsc_fw_is_available(xdev))
+ return;
+
+ pci_free_irq_vectors(xdev->pdev);
+ kfree(dev_res->irq_info);
+}
+
+int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn,
+ unsigned int *irqn)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ struct xsc_eq *eq, *n;
+ int err = -ENOENT;
+
+ if (!xdev->caps.msix_enable)
+ return 0;
+
+ spin_lock(&table->lock);
+ list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ if (eq->index == vector) {
+ *eqn = eq->eqn;
+ *irqn = eq->irqn;
+ err = 0;
+ break;
+ }
+ }
+ spin_unlock(&table->lock);
+
+ return err;
+}
+EXPORT_SYMBOL(xsc_core_vector2eqn);
+
+static void free_comp_eqs(struct xsc_core_device *xdev)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ struct xsc_eq *eq, *n;
+
+ spin_lock(&table->lock);
+ list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ list_del(&eq->list);
+ spin_unlock(&table->lock);
+ if (xsc_destroy_unmap_eq(xdev, eq))
+ xsc_core_warn(xdev, "failed to destroy EQ 0x%x\n", eq->eqn);
+ kfree(eq);
+ spin_lock(&table->lock);
+ }
+ spin_unlock(&table->lock);
+}
+
+static int alloc_comp_eqs(struct xsc_core_device *xdev)
+{
+ struct xsc_eq_table *table = &xdev->dev_res->eq_table;
+ char name[XSC_MAX_IRQ_NAME];
+ struct xsc_eq *eq;
+ int ncomp_vec;
+ int nent;
+ int err;
+ int i;
+
+ INIT_LIST_HEAD(&table->comp_eqs_list);
+ ncomp_vec = table->num_comp_vectors;
+ nent = XSC_COMP_EQ_SIZE;
+
+ for (i = 0; i < ncomp_vec; i++) {
+ eq = kzalloc(sizeof(*eq), GFP_KERNEL);
+ if (!eq) {
+ err = -ENOMEM;
+ goto clean;
+ }
+
+ snprintf(name, XSC_MAX_IRQ_NAME, "xsc_comp%d", i);
+ err = xsc_create_map_eq(xdev, eq,
+ i + table->eq_vec_comp_base, nent, name);
+ if (err) {
+ kfree(eq);
+ goto clean;
+ }
+
+ eq->index = i;
+ spin_lock(&table->lock);
+ list_add_tail(&eq->list, &table->comp_eqs_list);
+ spin_unlock(&table->lock);
+ }
+
+ return 0;
+
+clean:
+ free_comp_eqs(xdev);
+ return err;
+}
+
+static irqreturn_t xsc_cmd_handler(int irq, void *arg)
+{
+ struct xsc_core_device *xdev = (struct xsc_core_device *)arg;
+ int err;
+
+ disable_irq_nosync(xdev->cmd.irqn);
+ err = xsc_cmd_err_handler(xdev);
+ if (!err)
+ xsc_cmd_resp_handler(xdev);
+ enable_irq(xdev->cmd.irqn);
+
+ return IRQ_HANDLED;
+}
+
+static int xsc_request_irq_for_cmdq(struct xsc_core_device *xdev, u8 vecidx)
+{
+ struct xsc_dev_resource *dev_res = xdev->dev_res;
+
+ writel(xdev->msix_vec_base + vecidx, REG_ADDR(xdev, xdev->cmd.reg.msix_vec_addr));
+
+ snprintf(dev_res->irq_info[vecidx].name, XSC_MAX_IRQ_NAME, "%s@pci:%s",
+ "xsc_cmd", pci_name(xdev->pdev));
+ xdev->cmd.irqn = pci_irq_vector(xdev->pdev, vecidx);
+ return request_irq(xdev->cmd.irqn, xsc_cmd_handler, 0,
+ dev_res->irq_info[vecidx].name, xdev);
+}
+
+static void xsc_free_irq_for_cmdq(struct xsc_core_device *xdev)
+{
+ xsc_free_irq(xdev, XSC_VEC_CMD);
+}
+
+static irqreturn_t xsc_event_handler(int irq, void *arg)
+{
+ struct xsc_core_device *xdev = (struct xsc_core_device *)arg;
+
+ xsc_core_dbg(xdev, "cmd event hint irq: %d\n", irq);
+
+ if (!xdev->eth_priv)
+ return IRQ_NONE;
+
+ if (!xdev->event_handler)
+ return IRQ_NONE;
+
+ xdev->event_handler(xdev->eth_priv);
+
+ return IRQ_HANDLED;
+}
+
+static int xsc_request_irq_for_event(struct xsc_core_device *xdev)
+{
+ struct xsc_dev_resource *dev_res = xdev->dev_res;
+
+ snprintf(dev_res->irq_info[XSC_VEC_CMD_EVENT].name, XSC_MAX_IRQ_NAME, "%s@pci:%s",
+ "xsc_eth_event", pci_name(xdev->pdev));
+ return request_irq(pci_irq_vector(xdev->pdev, XSC_VEC_CMD_EVENT), xsc_event_handler, 0,
+ dev_res->irq_info[XSC_VEC_CMD_EVENT].name, xdev);
+}
+
+static void xsc_free_irq_for_event(struct xsc_core_device *xdev)
+{
+ xsc_free_irq(xdev, XSC_VEC_CMD_EVENT);
+}
+
+static int xsc_cmd_enable_msix(struct xsc_core_device *xdev)
+{
+ struct xsc_msix_table_info_mbox_in in;
+ struct xsc_msix_table_info_mbox_out out;
+ int err;
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ENABLE_MSIX);
+
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err) {
+ xsc_core_err(xdev, "xsc_cmd_exec enable msix failed %d\n", err);
+ return err;
+ }
+
+ return 0;
+}
+
+int xsc_irq_eq_create(struct xsc_core_device *xdev)
+{
+ int err;
+
+ if (xdev->caps.msix_enable == 0)
+ return 0;
+
+ err = xsc_alloc_irq_vectors(xdev);
+ if (err) {
+ xsc_core_err(xdev, "enable msix failed, err=%d\n", err);
+ goto out;
+ }
+
+ err = xsc_start_eqs(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to start EQs, err=%d\n", err);
+ goto err_free_irq_vectors;
+ }
+
+ err = alloc_comp_eqs(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to alloc comp EQs, err=%d\n", err);
+ goto err_stop_eqs;
+ }
+
+ err = xsc_request_irq_for_cmdq(xdev, XSC_VEC_CMD);
+ if (err) {
+ xsc_core_err(xdev, "failed to request irq for cmdq, err=%d\n", err);
+ goto err_free_comp_eqs;
+ }
+
+ err = xsc_request_irq_for_event(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to request irq for event, err=%d\n", err);
+ goto err_free_irq_cmdq;
+ }
+
+ err = set_comp_irq_affinity_hints(xdev);
+ if (err) {
+ xsc_core_err(xdev, "failed to alloc affinity hint cpumask, err=%d\n", err);
+ goto err_free_irq_evnt;
+ }
+
+ xsc_cmd_use_events(xdev);
+ err = xsc_cmd_enable_msix(xdev);
+ if (err) {
+ xsc_core_err(xdev, "xsc_cmd_enable_msix failed %d.\n", err);
+ xsc_cmd_use_polling(xdev);
+ goto err_free_irq_evnt;
+ }
+ return 0;
+
+err_free_irq_evnt:
+ xsc_free_irq_for_event(xdev);
+err_free_irq_cmdq:
+ xsc_free_irq_for_cmdq(xdev);
+err_free_comp_eqs:
+ free_comp_eqs(xdev);
+err_stop_eqs:
+ xsc_stop_eqs(xdev);
+err_free_irq_vectors:
+ xsc_free_irq_vectors(xdev);
+out:
+ return err;
+}
+
+int xsc_irq_eq_destroy(struct xsc_core_device *xdev)
+{
+ if (xdev->caps.msix_enable == 0)
+ return 0;
+
+ xsc_stop_eqs(xdev);
+ clear_comp_irq_affinity_hints(xdev);
+ free_comp_eqs(xdev);
+
+ xsc_free_irq_for_event(xdev);
+ xsc_free_irq_for_cmdq(xdev);
+ xsc_free_irq_vectors(xdev);
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h
new file mode 100644
index 000000000..c3665573c
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_PCI_IRQ_H
+#define XSC_PCI_IRQ_H
+
+#include "common/xsc_core.h"
+
+int xsc_irq_eq_create(struct xsc_core_device *xdev);
+int xsc_irq_eq_destroy(struct xsc_core_device *xdev);
+
+#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 07/16] net-next/yunsilicon: Device and interface management
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (5 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 06/16] net-next/yunsilicon: Add pci irq Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 08/16] net-next/yunsilicon: Add ethernet interface Xin Tian
` (9 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
The xsc device supports both Ethernet and RDMA interfaces.
This patch provides a set of APIs to implement the registration
of new interfaces and handle the interfaces during device
attach/detach or add/remove events.
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 59 +++-
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +-
.../net/ethernet/yunsilicon/xsc/pci/intf.c | 279 ++++++++++++++++++
.../net/ethernet/yunsilicon/xsc/pci/intf.h | 22 ++
.../net/ethernet/yunsilicon/xsc/pci/main.c | 16 +
5 files changed, 373 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/intf.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/intf.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 3541be638..fc2d3d01b 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -294,11 +294,60 @@ struct xsc_eq_table {
spinlock_t lock;
};
+// irq
struct xsc_irq_info {
cpumask_var_t mask;
char name[XSC_MAX_IRQ_NAME];
};
+// intf
+enum xsc_dev_event {
+ XSC_DEV_EVENT_SYS_ERROR,
+ XSC_DEV_EVENT_PORT_UP,
+ XSC_DEV_EVENT_PORT_DOWN,
+ XSC_DEV_EVENT_PORT_INITIALIZED,
+ XSC_DEV_EVENT_LID_CHANGE,
+ XSC_DEV_EVENT_PKEY_CHANGE,
+ XSC_DEV_EVENT_GUID_CHANGE,
+ XSC_DEV_EVENT_CLIENT_REREG,
+};
+
+enum {
+ XSC_INTERFACE_ADDED,
+ XSC_INTERFACE_ATTACHED,
+};
+
+enum xsc_interface_state {
+ XSC_INTERFACE_STATE_UP = BIT(0),
+ XSC_INTERFACE_STATE_TEARDOWN = BIT(1),
+};
+
+enum {
+ XSC_INTERFACE_PROTOCOL_IB = 0,
+ XSC_INTERFACE_PROTOCOL_ETH = 1,
+};
+
+struct xsc_interface {
+ struct list_head list;
+ int protocol;
+
+ void *(*add)(struct xsc_core_device *xdev);
+ void (*remove)(struct xsc_core_device *xdev, void *context);
+ int (*attach)(struct xsc_core_device *xdev, void *context);
+ void (*detach)(struct xsc_core_device *xdev, void *context);
+ void (*event)(struct xsc_core_device *xdev, void *context,
+ enum xsc_dev_event event, unsigned long param);
+ void *(*get_dev)(void *context);
+};
+
+struct xsc_device_context {
+ struct list_head list;
+ struct xsc_interface *intf;
+ void *context;
+ unsigned long state;
+};
+
+// xsc_core
struct xsc_dev_resource {
struct xsc_qp_table qp_table;
struct xsc_cq_table cq_table;
@@ -436,11 +485,6 @@ enum xsc_pci_state {
XSC_PCI_STATE_ENABLED,
};
-enum xsc_interface_state {
- XSC_INTERFACE_STATE_UP = BIT(0),
- XSC_INTERFACE_STATE_TEARDOWN = BIT(1),
-};
-
struct xsc_priv {
char name[XSC_MAX_NAME_LEN];
struct list_head dev_list;
@@ -456,6 +500,8 @@ struct xsc_core_device {
void *eth_priv;
struct xsc_dev_resource *dev_res;
+ void (*event)(struct xsc_core_device *xdev,
+ enum xsc_dev_event event, unsigned long param);
void (*event_handler)(void *adapter);
void __iomem *bar;
@@ -500,6 +546,9 @@ struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn,
unsigned int *irqn);
+int xsc_register_interface(struct xsc_interface *intf);
+void xsc_unregister_interface(struct xsc_interface *intf);
+
static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
{
if (likely(BITS_PER_LONG == 64 || buf->nbufs == 1))
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 3525d1c74..0f4b17dfa 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,4 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o
+xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o intf.o
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/intf.c b/drivers/net/ethernet/yunsilicon/xsc/pci/intf.c
new file mode 100644
index 000000000..485751b23
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/intf.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (C) 2021-2024, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ * Copyright (c) 2006, 2007 Cisco Systems, Inc. All rights reserved.
+ * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "common/xsc_core.h"
+#include "intf.h"
+
+LIST_HEAD(intf_list);
+LIST_HEAD(xsc_dev_list);
+DEFINE_MUTEX(xsc_intf_mutex); // protect intf_list and xsc_dev_list
+
+static void xsc_add_device(struct xsc_interface *intf, struct xsc_priv *priv)
+{
+ struct xsc_device_context *dev_ctx;
+ struct xsc_core_device *xdev;
+
+ xdev = container_of(priv, struct xsc_core_device, priv);
+ dev_ctx = kzalloc(sizeof(*dev_ctx), GFP_KERNEL);
+ if (!dev_ctx)
+ return;
+
+ dev_ctx->intf = intf;
+
+ dev_ctx->context = intf->add(xdev);
+ if (dev_ctx->context) {
+ set_bit(XSC_INTERFACE_ADDED, &dev_ctx->state);
+ if (intf->attach)
+ set_bit(XSC_INTERFACE_ATTACHED, &dev_ctx->state);
+
+ spin_lock_irq(&priv->ctx_lock);
+ list_add_tail(&dev_ctx->list, &priv->ctx_list);
+ spin_unlock_irq(&priv->ctx_lock);
+ } else {
+ kfree(dev_ctx);
+ }
+}
+
+static struct xsc_device_context *xsc_get_device(struct xsc_interface *intf,
+ struct xsc_priv *priv)
+{
+ struct xsc_device_context *dev_ctx;
+
+ /* caller of this function has mutex protection */
+ list_for_each_entry(dev_ctx, &priv->ctx_list, list)
+ if (dev_ctx->intf == intf)
+ return dev_ctx;
+
+ return NULL;
+}
+
+static void xsc_remove_device(struct xsc_interface *intf, struct xsc_priv *priv)
+{
+ struct xsc_core_device *xdev = container_of(priv, struct xsc_core_device, priv);
+ struct xsc_device_context *dev_ctx;
+
+ dev_ctx = xsc_get_device(intf, priv);
+ if (!dev_ctx)
+ return;
+
+ spin_lock_irq(&priv->ctx_lock);
+ list_del(&dev_ctx->list);
+ spin_unlock_irq(&priv->ctx_lock);
+
+ if (test_bit(XSC_INTERFACE_ADDED, &dev_ctx->state))
+ intf->remove(xdev, dev_ctx->context);
+
+ kfree(dev_ctx);
+}
+
+int xsc_register_interface(struct xsc_interface *intf)
+{
+ struct xsc_priv *priv;
+
+ if (!intf->add || !intf->remove)
+ return -EINVAL;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_add_tail(&intf->list, &intf_list);
+ list_for_each_entry(priv, &xsc_dev_list, dev_list) {
+ xsc_add_device(intf, priv);
+ }
+ mutex_unlock(&xsc_intf_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(xsc_register_interface);
+
+void xsc_unregister_interface(struct xsc_interface *intf)
+{
+ struct xsc_priv *priv;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_for_each_entry(priv, &xsc_dev_list, dev_list)
+ xsc_remove_device(intf, priv);
+ list_del(&intf->list);
+ mutex_unlock(&xsc_intf_mutex);
+}
+EXPORT_SYMBOL(xsc_unregister_interface);
+
+static void xsc_attach_interface(struct xsc_interface *intf,
+ struct xsc_priv *priv)
+{
+ struct xsc_core_device *xdev = container_of(priv, struct xsc_core_device, priv);
+ struct xsc_device_context *dev_ctx;
+
+ dev_ctx = xsc_get_device(intf, priv);
+ if (!dev_ctx)
+ return;
+
+ if (intf->attach) {
+ if (test_bit(XSC_INTERFACE_ATTACHED, &dev_ctx->state))
+ return;
+ if (intf->attach(xdev, dev_ctx->context))
+ return;
+ set_bit(XSC_INTERFACE_ATTACHED, &dev_ctx->state);
+ } else {
+ if (test_bit(XSC_INTERFACE_ADDED, &dev_ctx->state))
+ return;
+ dev_ctx->context = intf->add(xdev);
+ if (!dev_ctx->context)
+ return;
+ set_bit(XSC_INTERFACE_ADDED, &dev_ctx->state);
+ }
+}
+
+static void xsc_detach_interface(struct xsc_interface *intf,
+ struct xsc_priv *priv)
+{
+ struct xsc_core_device *xdev = container_of(priv, struct xsc_core_device, priv);
+ struct xsc_device_context *dev_ctx;
+
+ dev_ctx = xsc_get_device(intf, priv);
+ if (!dev_ctx)
+ return;
+
+ if (intf->detach) {
+ if (!test_bit(XSC_INTERFACE_ATTACHED, &dev_ctx->state))
+ return;
+ intf->detach(xdev, dev_ctx->context);
+ clear_bit(XSC_INTERFACE_ATTACHED, &dev_ctx->state);
+ } else {
+ if (!test_bit(XSC_INTERFACE_ADDED, &dev_ctx->state))
+ return;
+ intf->remove(xdev, dev_ctx->context);
+ clear_bit(XSC_INTERFACE_ADDED, &dev_ctx->state);
+ }
+}
+
+void xsc_attach_device(struct xsc_core_device *xdev)
+{
+ struct xsc_priv *priv = &xdev->priv;
+ struct xsc_interface *intf;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_for_each_entry(intf, &intf_list, list) {
+ xsc_attach_interface(intf, priv);
+ }
+ mutex_unlock(&xsc_intf_mutex);
+}
+
+void xsc_detach_device(struct xsc_core_device *xdev)
+{
+ struct xsc_priv *priv = &xdev->priv;
+ struct xsc_interface *intf;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_for_each_entry(intf, &intf_list, list)
+ xsc_detach_interface(intf, priv);
+ mutex_unlock(&xsc_intf_mutex);
+}
+
+bool xsc_device_registered(struct xsc_core_device *xdev)
+{
+ struct xsc_priv *priv;
+ bool found = false;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_for_each_entry(priv, &xsc_dev_list, dev_list)
+ if (priv == &xdev->priv)
+ found = true;
+ mutex_unlock(&xsc_intf_mutex);
+
+ return found;
+}
+
+int xsc_register_device(struct xsc_core_device *xdev)
+{
+ struct xsc_priv *priv = &xdev->priv;
+ struct xsc_interface *intf;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_add_tail(&priv->dev_list, &xsc_dev_list);
+ list_for_each_entry(intf, &intf_list, list)
+ xsc_add_device(intf, priv);
+ mutex_unlock(&xsc_intf_mutex);
+
+ return 0;
+}
+
+void xsc_unregister_device(struct xsc_core_device *xdev)
+{
+ struct xsc_priv *priv = &xdev->priv;
+ struct xsc_interface *intf;
+
+ mutex_lock(&xsc_intf_mutex);
+ list_for_each_entry_reverse(intf, &intf_list, list)
+ xsc_remove_device(intf, priv);
+ list_del(&priv->dev_list);
+ mutex_unlock(&xsc_intf_mutex);
+}
+
+void xsc_add_dev_by_protocol(struct xsc_core_device *xdev, int protocol)
+{
+ struct xsc_interface *intf;
+
+ list_for_each_entry(intf, &intf_list, list)
+ if (intf->protocol == protocol) {
+ xsc_add_device(intf, &xdev->priv);
+ break;
+ }
+}
+
+void xsc_remove_dev_by_protocol(struct xsc_core_device *xdev, int protocol)
+{
+ struct xsc_interface *intf;
+
+ list_for_each_entry(intf, &intf_list, list)
+ if (intf->protocol == protocol) {
+ xsc_remove_device(intf, &xdev->priv);
+ break;
+ }
+}
+
+void xsc_dev_list_lock(void)
+{
+ mutex_lock(&xsc_intf_mutex);
+}
+
+void xsc_dev_list_unlock(void)
+{
+ mutex_unlock(&xsc_intf_mutex);
+}
+
+int xsc_dev_list_trylock(void)
+{
+ return mutex_trylock(&xsc_intf_mutex);
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/intf.h b/drivers/net/ethernet/yunsilicon/xsc/pci/intf.h
new file mode 100644
index 000000000..acb9f4526
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/intf.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef INTF_H
+#define INTF_H
+
+#include "common/xsc_core.h"
+
+void xsc_attach_device(struct xsc_core_device *xdev);
+void xsc_detach_device(struct xsc_core_device *xdev);
+bool xsc_device_registered(struct xsc_core_device *xdev);
+int xsc_register_device(struct xsc_core_device *xdev);
+void xsc_unregister_device(struct xsc_core_device *xdev);
+void xsc_add_dev_by_protocol(struct xsc_core_device *xdev, int protocol);
+void xsc_remove_dev_by_protocol(struct xsc_core_device *xdev, int protocol);
+void xsc_dev_list_lock(void);
+void xsc_dev_list_unlock(void);
+int xsc_dev_list_trylock(void);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index ea9bda0e6..27ce9ee90 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -10,6 +10,7 @@
#include "cq.h"
#include "eq.h"
#include "pci_irq.h"
+#include "intf.h"
unsigned int xsc_debug_mask;
module_param_named(debug_mask, xsc_debug_mask, uint, 0644);
@@ -289,10 +290,22 @@ static int xsc_load(struct xsc_core_device *xdev)
goto err_irq_eq_create;
}
+ if (xsc_device_registered(xdev)) {
+ xsc_attach_device(xdev);
+ } else {
+ err = xsc_register_device(xdev);
+ if (err) {
+ xsc_core_err(xdev, "register device failed %d\n", err);
+ goto err_reg_dev;
+ }
+ }
+
set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
mutex_unlock(&xdev->intf_state_mutex);
return 0;
+err_reg_dev:
+ xsc_irq_eq_destroy(xdev);
err_irq_eq_create:
xsc_hw_cleanup(xdev);
out:
@@ -302,6 +315,7 @@ static int xsc_load(struct xsc_core_device *xdev)
static int xsc_unload(struct xsc_core_device *xdev)
{
+ xsc_unregister_device(xdev);
mutex_lock(&xdev->intf_state_mutex);
if (!test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) {
xsc_core_warn(xdev, "%s: interface is down, NOP\n",
@@ -311,6 +325,8 @@ static int xsc_unload(struct xsc_core_device *xdev)
}
clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
+ if (xsc_device_registered(xdev))
+ xsc_detach_device(xdev);
xsc_irq_eq_destroy(xdev);
xsc_hw_cleanup(xdev);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 08/16] net-next/yunsilicon: Add ethernet interface
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (6 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 07/16] net-next/yunsilicon: Device and interface management Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 09/16] net-next/yunsilicon: Init net device Xin Tian
` (8 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Build a basic netdevice driver
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
drivers/net/ethernet/yunsilicon/Makefile | 2 +-
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 1 +
.../net/ethernet/yunsilicon/xsc/net/main.c | 118 ++++++++++++++++++
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 16 +++
.../yunsilicon/xsc/net/xsc_eth_common.h | 15 +++
5 files changed, 151 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/main.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile
index 6fc8259a7..65b9a6265 100644
--- a/drivers/net/ethernet/yunsilicon/Makefile
+++ b/drivers/net/ethernet/yunsilicon/Makefile
@@ -4,5 +4,5 @@
# Makefile for the Yunsilicon device drivers.
#
-# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
+obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index fc2d3d01b..b78443bbf 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -497,6 +497,7 @@ struct xsc_core_device {
struct pci_dev *pdev;
struct device *device;
struct xsc_priv priv;
+ void *netdev;
void *eth_priv;
struct xsc_dev_resource *dev_res;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
new file mode 100644
index 000000000..e265016eb
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include "common/xsc_core.h"
+#include "xsc_eth_common.h"
+#include "xsc_eth.h"
+
+static int xsc_get_max_num_channels(struct xsc_core_device *xdev)
+{
+ return min_t(int, xdev->dev_res->eq_table.num_comp_vectors,
+ XSC_ETH_MAX_NUM_CHANNELS);
+}
+
+static void *xsc_eth_add(struct xsc_core_device *xdev)
+{
+ struct xsc_adapter *adapter;
+ struct net_device *netdev;
+ int num_chl, num_tc;
+ int err;
+
+ num_chl = xsc_get_max_num_channels(xdev);
+ num_tc = xdev->caps.max_tc;
+
+ netdev = alloc_etherdev_mqs(sizeof(struct xsc_adapter),
+ num_chl * num_tc, num_chl);
+ if (!netdev) {
+ xsc_core_warn(xdev, "alloc_etherdev_mqs failed, txq=%d, rxq=%d\n",
+ (num_chl * num_tc), num_chl);
+ return NULL;
+ }
+
+ netdev->dev.parent = &xdev->pdev->dev;
+ adapter = netdev_priv(netdev);
+ adapter->netdev = netdev;
+ adapter->pdev = xdev->pdev;
+ adapter->dev = &adapter->pdev->dev;
+ adapter->xdev = (void *)xdev;
+ xdev->eth_priv = adapter;
+
+ err = register_netdev(netdev);
+ if (err) {
+ xsc_core_warn(xdev, "register_netdev failed, err=%d\n", err);
+ goto err_free_netdev;
+ }
+
+ xdev->netdev = (void *)netdev;
+
+ return adapter;
+
+err_free_netdev:
+ free_netdev(netdev);
+
+ return NULL;
+}
+
+static void xsc_eth_remove(struct xsc_core_device *xdev, void *context)
+{
+ struct xsc_adapter *adapter;
+
+ if (!xdev)
+ return;
+
+ adapter = xdev->eth_priv;
+ if (!adapter) {
+ xsc_core_warn(xdev, "failed! adapter is null\n");
+ return;
+ }
+
+ xsc_core_info(adapter->xdev, "remove netdev %s entry\n", adapter->netdev->name);
+
+ unregister_netdev(adapter->netdev);
+
+ free_netdev(adapter->netdev);
+
+ xdev->netdev = NULL;
+ xdev->eth_priv = NULL;
+}
+
+static struct xsc_interface xsc_interface = {
+ .add = xsc_eth_add,
+ .remove = xsc_eth_remove,
+ .event = NULL,
+ .protocol = XSC_INTERFACE_PROTOCOL_ETH,
+};
+
+static void xsc_remove_eth_driver(void)
+{
+ xsc_unregister_interface(&xsc_interface);
+}
+
+static __init int xsc_net_driver_init(void)
+{
+ int ret;
+
+ ret = xsc_register_interface(&xsc_interface);
+ if (ret != 0) {
+ pr_err("failed to register interface\n");
+ goto out;
+ }
+ return 0;
+out:
+ return -1;
+}
+
+static __exit void xsc_net_driver_exit(void)
+{
+ xsc_remove_eth_driver();
+}
+
+module_init(xsc_net_driver_init);
+module_exit(xsc_net_driver_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION(XSC_ETH_DRV_DESC);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
new file mode 100644
index 000000000..7189acebd
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_ETH_H
+#define XSC_ETH_H
+
+struct xsc_adapter {
+ struct net_device *netdev;
+ struct pci_dev *pdev;
+ struct device *dev;
+ struct xsc_core_device *xdev;
+};
+
+#endif /* XSC_ETH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
new file mode 100644
index 000000000..55dce1b2b
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_ETH_COMMON_H
+#define XSC_ETH_COMMON_H
+
+#define XSC_LOG_INDIR_RQT_SIZE 0x8
+
+#define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE)
+#define XSC_ETH_MIN_NUM_CHANNELS 2
+#define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE
+
+#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 09/16] net-next/yunsilicon: Init net device
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (7 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 08/16] net-next/yunsilicon: Add ethernet interface Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 18:28 ` Andrew Lunn
2024-12-18 10:50 ` [PATCH v1 10/16] net-next/yunsilicon: Add eth needed qp and cq apis Xin Tian
` (7 subsequent siblings)
16 siblings, 1 reply; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Initialize network device:
1. initialize hardware
2. configure network parameters
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 1 +
.../yunsilicon/xsc/common/xsc_device.h | 42 +++
.../ethernet/yunsilicon/xsc/common/xsc_pp.h | 38 ++
.../net/ethernet/yunsilicon/xsc/net/main.c | 334 +++++++++++++++++-
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 28 ++
.../yunsilicon/xsc/net/xsc_eth_common.h | 45 +++
.../net/ethernet/yunsilicon/xsc/net/xsc_pph.h | 176 +++++++++
.../ethernet/yunsilicon/xsc/net/xsc_queue.h | 76 ++++
8 files changed, 739 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index b78443bbf..432005f11 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -8,6 +8,7 @@
#include <linux/kernel.h>
#include <linux/pci.h>
+#include <linux/if_vlan.h>
#include "common/xsc_cmdq.h"
extern uint xsc_debug_mask;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
new file mode 100644
index 000000000..1a4838356
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_DEVICE_H
+#define XSC_DEVICE_H
+
+enum xsc_traffic_types {
+ XSC_TT_IPV4,
+ XSC_TT_IPV4_TCP,
+ XSC_TT_IPV4_UDP,
+ XSC_TT_IPV6,
+ XSC_TT_IPV6_TCP,
+ XSC_TT_IPV6_UDP,
+ XSC_TT_IPV4_IPSEC_AH,
+ XSC_TT_IPV6_IPSEC_AH,
+ XSC_TT_IPV4_IPSEC_ESP,
+ XSC_TT_IPV6_IPSEC_ESP,
+ XSC_TT_ANY,
+ XSC_NUM_TT,
+};
+
+#define XSC_NUM_INDIR_TIRS XSC_NUM_TT
+
+enum {
+ XSC_L3_PROT_TYPE_IPV4 = 1 << 0,
+ XSC_L3_PROT_TYPE_IPV6 = 1 << 1,
+};
+
+enum {
+ XSC_L4_PROT_TYPE_TCP = 1 << 0,
+ XSC_L4_PROT_TYPE_UDP = 1 << 1,
+};
+
+struct xsc_tirc_config {
+ u8 l3_prot_type;
+ u8 l4_prot_type;
+ u32 rx_hash_fields;
+};
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
new file mode 100644
index 000000000..c428555e0
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_PP_H
+#define XSC_PP_H
+
+enum {
+ XSC_HASH_FIELD_SEL_SRC_IP = 1 << 0,
+ XSC_HASH_FIELD_SEL_PROTO = 1 << 1,
+ XSC_HASH_FIELD_SEL_DST_IP = 1 << 2,
+ XSC_HASH_FIELD_SEL_SPORT = 1 << 3,
+ XSC_HASH_FIELD_SEL_DPORT = 1 << 4,
+ XSC_HASH_FIELD_SEL_SRC_IPV6 = 1 << 5,
+ XSC_HASH_FIELD_SEL_DST_IPV6 = 1 << 6,
+ XSC_HASH_FIELD_SEL_SPORT_V6 = 1 << 7,
+ XSC_HASH_FIELD_SEL_DPORT_V6 = 1 << 8,
+};
+
+#define XSC_HASH_IP (XSC_HASH_FIELD_SEL_SRC_IP |\
+ XSC_HASH_FIELD_SEL_DST_IP |\
+ XSC_HASH_FIELD_SEL_PROTO)
+#define XSC_HASH_IP_PORTS (XSC_HASH_FIELD_SEL_SRC_IP |\
+ XSC_HASH_FIELD_SEL_DST_IP |\
+ XSC_HASH_FIELD_SEL_SPORT |\
+ XSC_HASH_FIELD_SEL_DPORT |\
+ XSC_HASH_FIELD_SEL_PROTO)
+#define XSC_HASH_IP6 (XSC_HASH_FIELD_SEL_SRC_IPV6 |\
+ XSC_HASH_FIELD_SEL_DST_IPV6 |\
+ XSC_HASH_FIELD_SEL_PROTO)
+#define XSC_HASH_IP6_PORTS (XSC_HASH_FIELD_SEL_SRC_IPV6 |\
+ XSC_HASH_FIELD_SEL_DST_IPV6 |\
+ XSC_HASH_FIELD_SEL_SPORT_V6 |\
+ XSC_HASH_FIELD_SEL_DPORT_V6 |\
+ XSC_HASH_FIELD_SEL_PROTO)
+
+#endif /* XSC_PP_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index e265016eb..9e3369eb9 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -5,20 +5,335 @@
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
#include "common/xsc_core.h"
+#include "common/xsc_driver.h"
+#include "common/xsc_device.h"
+#include "common/xsc_pp.h"
#include "xsc_eth_common.h"
#include "xsc_eth.h"
+#define XSC_ETH_DRV_DESC "Yunsilicon Xsc ethernet driver"
+
+static const struct xsc_tirc_config tirc_default_config[XSC_NUM_INDIR_TIRS] = {
+ [XSC_TT_IPV4] = {
+ .l3_prot_type = XSC_L3_PROT_TYPE_IPV4,
+ .l4_prot_type = 0,
+ .rx_hash_fields = XSC_HASH_IP,
+ },
+ [XSC_TT_IPV4_TCP] = {
+ .l3_prot_type = XSC_L3_PROT_TYPE_IPV4,
+ .l4_prot_type = XSC_L4_PROT_TYPE_TCP,
+ .rx_hash_fields = XSC_HASH_IP_PORTS,
+ },
+ [XSC_TT_IPV4_UDP] = {
+ .l3_prot_type = XSC_L3_PROT_TYPE_IPV4,
+ .l4_prot_type = XSC_L4_PROT_TYPE_UDP,
+ .rx_hash_fields = XSC_HASH_IP_PORTS,
+ },
+ [XSC_TT_IPV6] = {
+ .l3_prot_type = XSC_L3_PROT_TYPE_IPV6,
+ .l4_prot_type = 0,
+ .rx_hash_fields = XSC_HASH_IP6,
+ },
+ [XSC_TT_IPV6_TCP] = {
+ .l3_prot_type = XSC_L3_PROT_TYPE_IPV6,
+ .l4_prot_type = XSC_L4_PROT_TYPE_TCP,
+ .rx_hash_fields = XSC_HASH_IP6_PORTS,
+ },
+ [XSC_TT_IPV6_UDP] = {
+ .l3_prot_type = XSC_L3_PROT_TYPE_IPV6,
+ .l4_prot_type = XSC_L4_PROT_TYPE_UDP,
+ .rx_hash_fields = XSC_HASH_IP6_PORTS,
+ },
+};
+
static int xsc_get_max_num_channels(struct xsc_core_device *xdev)
{
return min_t(int, xdev->dev_res->eq_table.num_comp_vectors,
XSC_ETH_MAX_NUM_CHANNELS);
}
+static void xsc_build_default_indir_rqt(u32 *indirection_rqt, int len,
+ int num_channels)
+{
+ int i;
+
+ for (i = 0; i < len; i++)
+ indirection_rqt[i] = i % num_channels;
+}
+
+static void xsc_build_rss_param(struct xsc_rss_params *rss_param, u16 num_channels)
+{
+ enum xsc_traffic_types tt;
+
+ rss_param->hfunc = ETH_RSS_HASH_TOP;
+ netdev_rss_key_fill(rss_param->toeplitz_hash_key,
+ sizeof(rss_param->toeplitz_hash_key));
+
+ xsc_build_default_indir_rqt(rss_param->indirection_rqt,
+ XSC_INDIR_RQT_SIZE, num_channels);
+
+ for (tt = 0; tt < XSC_NUM_INDIR_TIRS; tt++) {
+ rss_param->rx_hash_fields[tt] =
+ tirc_default_config[tt].rx_hash_fields;
+ }
+ rss_param->rss_hash_tmpl = XSC_HASH_IP_PORTS | XSC_HASH_IP6_PORTS;
+}
+
+static void xsc_eth_build_nic_params(struct xsc_adapter *adapter, u32 ch_num, u32 tc_num)
+{
+ struct xsc_eth_params *params = &adapter->nic_param;
+ struct xsc_core_device *xdev = adapter->xdev;
+
+ params->mtu = SW_DEFAULT_MTU;
+ params->num_tc = tc_num;
+
+ params->comp_vectors = xdev->dev_res->eq_table.num_comp_vectors;
+ params->max_num_ch = ch_num;
+ params->num_channels = ch_num;
+
+ params->rq_max_size = BIT(xdev->caps.log_max_qp_depth);
+ params->sq_max_size = BIT(xdev->caps.log_max_qp_depth);
+ xsc_build_rss_param(&adapter->rss_param, adapter->nic_param.num_channels);
+
+ xsc_core_info(xdev, "mtu=%d, num_ch=%d(max=%d), num_tc=%d\n",
+ params->mtu, params->num_channels,
+ params->max_num_ch, params->num_tc);
+}
+
+static int xsc_eth_netdev_init(struct xsc_adapter *adapter)
+{
+ unsigned int node, tc, nch;
+
+ tc = adapter->nic_param.num_tc;
+ nch = adapter->nic_param.max_num_ch;
+ node = dev_to_node(adapter->dev);
+ adapter->txq2sq = kcalloc_node(nch * tc,
+ sizeof(*adapter->txq2sq), GFP_KERNEL, node);
+ if (!adapter->txq2sq)
+ goto err_out;
+
+ adapter->workq = create_singlethread_workqueue("xsc_eth");
+ if (!adapter->workq)
+ goto err_free_priv;
+
+ netif_carrier_off(adapter->netdev);
+
+ return 0;
+
+err_free_priv:
+ kfree(adapter->txq2sq);
+err_out:
+ return -ENOMEM;
+}
+
+static int xsc_eth_close(struct net_device *netdev)
+{
+ return 0;
+}
+
+static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz)
+{
+ struct xsc_set_mtu_mbox_in in;
+ struct xsc_set_mtu_mbox_out out;
+ int ret;
+
+ memset(&in, 0, sizeof(struct xsc_set_mtu_mbox_in));
+ memset(&out, 0, sizeof(struct xsc_set_mtu_mbox_out));
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_SET_MTU);
+ in.mtu = cpu_to_be16(mtu);
+ in.rx_buf_sz_min = cpu_to_be16(rx_buf_sz);
+ in.mac_port = xdev->mac_port;
+
+ ret = xsc_cmd_exec(xdev, &in, sizeof(struct xsc_set_mtu_mbox_in), &out,
+ sizeof(struct xsc_set_mtu_mbox_out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(xdev, "failed to set hw_mtu=%u rx_buf_sz=%u, err=%d, status=%d\n",
+ mtu, rx_buf_sz, ret, out.hdr.status);
+ ret = -ENOEXEC;
+ }
+
+ return ret;
+}
+
+static const struct net_device_ops xsc_netdev_ops = {
+ // TBD
+};
+
+static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+
+ /* Set up network device as normal. */
+ netdev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE;
+ netdev->netdev_ops = &xsc_netdev_ops;
+
+ netdev->min_mtu = SW_MIN_MTU;
+ netdev->max_mtu = SW_MAX_MTU;
+ /*mtu - macheaderlen - ipheaderlen should be aligned in 8B*/
+ netdev->mtu = SW_DEFAULT_MTU;
+
+ netdev->vlan_features |= NETIF_F_SG;
+ netdev->vlan_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;//NETIF_F_HW_CSUM;
+ netdev->vlan_features |= NETIF_F_GRO;
+ netdev->vlan_features |= NETIF_F_TSO;//NETIF_F_TSO_ECN
+ netdev->vlan_features |= NETIF_F_TSO6;
+
+ netdev->vlan_features |= NETIF_F_RXCSUM;
+ netdev->vlan_features |= NETIF_F_RXHASH;
+ netdev->vlan_features |= NETIF_F_GSO_PARTIAL;
+
+ netdev->hw_features = netdev->vlan_features;
+
+ netdev->features |= netdev->hw_features;
+ netdev->features |= NETIF_F_HIGHDMA;
+}
+
+static int xsc_eth_nic_init(struct xsc_adapter *adapter,
+ void *rep_priv, u32 ch_num, u32 tc_num)
+{
+ int err;
+
+ xsc_eth_build_nic_params(adapter, ch_num, tc_num);
+
+ err = xsc_eth_netdev_init(adapter);
+ if (err)
+ return err;
+
+ xsc_eth_build_nic_netdev(adapter);
+
+ return 0;
+}
+
+static void xsc_eth_nic_cleanup(struct xsc_adapter *adapter)
+{
+ destroy_workqueue(adapter->workq);
+ kfree(adapter->txq2sq);
+}
+
+static int xsc_eth_get_mac(struct xsc_core_device *xdev, char *mac)
+{
+ struct xsc_query_eth_mac_mbox_out *out;
+ struct xsc_query_eth_mac_mbox_in in;
+ int err;
+
+ out = kzalloc(sizeof(*out), GFP_KERNEL);
+ if (!out)
+ return -ENOMEM;
+
+ memset(&in, 0, sizeof(in));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_ETH_MAC);
+
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out));
+ if (err || out->hdr.status) {
+ xsc_core_warn(xdev, "get mac failed! err=%d, out.status=%u\n",
+ err, out->hdr.status);
+ err = -ENOEXEC;
+ goto exit;
+ }
+
+ memcpy(mac, out->mac, 6);
+ xsc_core_dbg(xdev, "get mac %02x:%02x:%02x:%02x:%02x:%02x\n",
+ mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
+
+exit:
+ kfree(out);
+
+ return err;
+}
+
+static void xsc_eth_l2_addr_init(struct xsc_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ char mac[6] = {0};
+ int ret = 0;
+
+ ret = xsc_eth_get_mac(adapter->xdev, mac);
+ if (ret) {
+ xsc_core_warn(adapter->xdev, "get mac failed %d, generate random mac...", ret);
+ eth_random_addr(mac);
+ }
+ dev_addr_mod(netdev, 0, mac, 6);
+
+ if (!is_valid_ether_addr(netdev->perm_addr))
+ memcpy(netdev->perm_addr, netdev->dev_addr, netdev->addr_len);
+}
+
+static int xsc_eth_nic_enable(struct xsc_adapter *adapter)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+
+ xsc_eth_l2_addr_init(adapter);
+
+ xsc_eth_set_hw_mtu(xdev, XSC_SW2HW_MTU(adapter->nic_param.mtu),
+ XSC_SW2HW_RX_PKT_LEN(adapter->nic_param.mtu));
+
+ rtnl_lock();
+ netif_device_attach(adapter->netdev);
+ rtnl_unlock();
+
+ return 0;
+}
+
+static void xsc_eth_nic_disable(struct xsc_adapter *adapter)
+{
+ rtnl_lock();
+ if (netif_running(adapter->netdev))
+ xsc_eth_close(adapter->netdev);
+ netif_device_detach(adapter->netdev);
+ rtnl_unlock();
+}
+
+static int xsc_attach_netdev(struct xsc_adapter *adapter)
+{
+ int err = -1;
+
+ err = xsc_eth_nic_enable(adapter);
+ if (err)
+ return err;
+
+ xsc_core_info(adapter->xdev, "%s ok\n", __func__);
+ return 0;
+}
+
+static void xsc_detach_netdev(struct xsc_adapter *adapter)
+{
+ xsc_eth_nic_disable(adapter);
+
+ flush_workqueue(adapter->workq);
+ adapter->status = XSCALE_ETH_DRIVER_DETACH;
+}
+
+static int xsc_eth_attach(struct xsc_core_device *xdev, struct xsc_adapter *adapter)
+{
+ int err = -1;
+
+ if (netif_device_present(adapter->netdev))
+ return 0;
+
+ err = xsc_attach_netdev(adapter);
+ if (err)
+ return err;
+
+ xsc_core_info(adapter->xdev, "%s ok\n", __func__);
+ return 0;
+}
+
+static void xsc_eth_detach(struct xsc_core_device *xdev, struct xsc_adapter *adapter)
+{
+ if (!netif_device_present(adapter->netdev))
+ return;
+
+ xsc_detach_netdev(adapter);
+}
+
static void *xsc_eth_add(struct xsc_core_device *xdev)
{
struct xsc_adapter *adapter;
struct net_device *netdev;
+ void *rep_priv = NULL;
int num_chl, num_tc;
int err;
@@ -41,16 +356,33 @@ static void *xsc_eth_add(struct xsc_core_device *xdev)
adapter->xdev = (void *)xdev;
xdev->eth_priv = adapter;
+ err = xsc_eth_nic_init(adapter, rep_priv, num_chl, num_tc);
+ if (err) {
+ xsc_core_warn(xdev, "xsc_nic_init failed, num_ch=%d, num_tc=%d, err=%d\n",
+ num_chl, num_tc, err);
+ goto err_free_netdev;
+ }
+
+ err = xsc_eth_attach(xdev, adapter);
+ if (err) {
+ xsc_core_warn(xdev, "xsc_eth_attach failed, err=%d\n", err);
+ goto err_cleanup_netdev;
+ }
+
err = register_netdev(netdev);
if (err) {
xsc_core_warn(xdev, "register_netdev failed, err=%d\n", err);
- goto err_free_netdev;
+ goto err_detach;
}
xdev->netdev = (void *)netdev;
return adapter;
+err_detach:
+ xsc_eth_detach(xdev, adapter);
+err_cleanup_netdev:
+ xsc_eth_nic_cleanup(adapter);
err_free_netdev:
free_netdev(netdev);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 7189acebd..cf16d98cb 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -6,11 +6,39 @@
#ifndef XSC_ETH_H
#define XSC_ETH_H
+#include "common/xsc_device.h"
+#include "xsc_eth_common.h"
+
+enum {
+ XSCALE_ETH_DRIVER_INIT,
+ XSCALE_ETH_DRIVER_OK,
+ XSCALE_ETH_DRIVER_CLOSE,
+ XSCALE_ETH_DRIVER_DETACH,
+};
+
+struct xsc_rss_params {
+ u32 indirection_rqt[XSC_INDIR_RQT_SIZE];
+ u32 rx_hash_fields[XSC_NUM_INDIR_TIRS];
+ u8 toeplitz_hash_key[52];
+ u8 hfunc;
+ u32 rss_hash_tmpl;
+};
+
struct xsc_adapter {
struct net_device *netdev;
struct pci_dev *pdev;
struct device *dev;
struct xsc_core_device *xdev;
+
+ struct xsc_eth_params nic_param;
+ struct xsc_rss_params rss_param;
+
+ struct workqueue_struct *workq;
+
+ struct xsc_sq **txq2sq;
+
+ u32 status;
+ struct mutex status_lock; // protect status
};
#endif /* XSC_ETH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index 55dce1b2b..9a878cfb7 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -6,10 +6,55 @@
#ifndef XSC_ETH_COMMON_H
#define XSC_ETH_COMMON_H
+#include "xsc_pph.h"
+
+#define SW_MIN_MTU ETH_MIN_MTU
+#define SW_DEFAULT_MTU ETH_DATA_LEN
+#define SW_MAX_MTU 9600
+
+#define XSC_ETH_HW_MTU_SEND 9800
+#define XSC_ETH_HW_MTU_RECV 9800
+#define XSC_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN * 2 + ETH_FCS_LEN)
+#define XSC_SW2HW_MTU(mtu) ((mtu) + XSC_ETH_HARD_MTU)
+#define XSC_SW2HW_FRAG_SIZE(mtu) ((mtu) + XSC_ETH_HARD_MTU)
+#define XSC_ETH_RX_MAX_HEAD_ROOM 256
+#define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM)
+
#define XSC_LOG_INDIR_RQT_SIZE 0x8
#define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE)
#define XSC_ETH_MIN_NUM_CHANNELS 2
#define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE
+struct xsc_eth_params {
+ u16 num_channels;
+ u16 max_num_ch;
+ u8 num_tc;
+ u32 mtu;
+ u32 hard_mtu;
+ u32 comp_vectors;
+ u32 sq_size;
+ u32 sq_max_size;
+ u8 rq_wq_type;
+ u32 rq_size;
+ u32 rq_max_size;
+ u32 rq_frags_size;
+
+ u16 num_rl_txqs;
+ u8 rx_cqe_compress_def;
+ u8 tunneled_offload_en;
+ u8 lro_en;
+ u8 tx_min_inline_mode;
+ u8 vlan_strip_disable;
+ u8 scatter_fcs_en;
+ u8 rx_dim_enabled;
+ u8 tx_dim_enabled;
+ u32 rx_dim_usecs_low;
+ u32 rx_dim_frames_low;
+ u32 tx_dim_usecs_low;
+ u32 tx_dim_frames_low;
+ u32 lro_timeout;
+ u32 pflags;
+};
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h
new file mode 100644
index 000000000..631c9d40e
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_PPH_H
+#define XSC_PPH_H
+
+#define XSC_PPH_HEAD_LEN 64
+
+enum {
+ L4_PROTO_NONE = 0,
+ L4_PROTO_TCP = 1,
+ L4_PROTO_UDP = 2,
+ L4_PROTO_ICMP = 3,
+ L4_PROTO_GRE = 4,
+};
+
+enum {
+ L3_PROTO_NONE = 0,
+ L3_PROTO_IP = 2,
+ L3_PROTO_IP6 = 3,
+};
+
+struct epp_pph {
+ u16 outer_eth_type; //2 bytes
+ u16 inner_eth_type; //4 bytes
+
+ u16 rsv1:1;
+ u16 outer_vlan_flag:2;
+ u16 outer_ip_type:2;
+ u16 outer_ip_ofst:5;
+ u16 outer_ip_len:6; //6 bytes
+
+ u16 rsv2:1;
+ u16 outer_tp_type:3;
+ u16 outer_tp_csum_flag:1;
+ u16 outer_tp_ofst:7;
+ u16 ext_tunnel_type:4; //8 bytes
+
+ u8 tunnel_ofst; //9 bytes
+ u8 inner_mac_ofst; //10 bytes
+
+ u32 rsv3:2;
+ u32 inner_mac_flag:1;
+ u32 inner_vlan_flag:2;
+ u32 inner_ip_type:2;
+ u32 inner_ip_ofst:8;
+ u32 inner_ip_len:6;
+ u32 inner_tp_type:2;
+ u32 inner_tp_csum_flag:1;
+ u32 inner_tp_ofst:8; //14 bytees
+
+ u16 rsv4:1;
+ u16 payload_type:4;
+ u16 payload_ofst:8;
+ u16 pkt_type:3; //16 bytes
+
+ u16 rsv5:2;
+ u16 pri:3;
+ u16 logical_in_port:11;
+ u16 vlan_info;
+ u8 error_bitmap:8; //21 bytes
+
+ u8 rsv6:7;
+ u8 recirc_id_vld:1;
+ u16 recirc_id; //24 bytes
+
+ u8 rsv7:7;
+ u8 recirc_data_vld:1;
+ u32 recirc_data; //29 bytes
+
+ u8 rsv8:6;
+ u8 mark_tag_vld:2;
+ u16 mark_tag; //32 bytes
+
+ u8 rsv9:4;
+ u8 upa_to_soc:1;
+ u8 upa_from_soc:1;
+ u8 upa_re_up_call:1;
+ u8 upa_pkt_drop:1; //33 bytes
+
+ u8 ucdv;
+ u16 rsv10:2;
+ u16 pkt_len:14; //36 bytes
+
+ u16 rsv11:2;
+ u16 pkt_hdr_ptr:14; //38 bytes
+
+ u64 rsv12:5;
+ u64 csum_ofst:8;
+ u64 csum_val:29;
+ u64 csum_plen:14;
+ u64 rsv11_0:8; //46 bytes
+
+ u64 rsv11_1;
+ u64 rsv11_2;
+ u16 rsv11_3;
+};
+
+#define OUTER_L3_BIT BIT(3)
+#define OUTER_L4_BIT BIT(2)
+#define INNER_L3_BIT BIT(1)
+#define INNER_L4_BIT BIT(0)
+#define OUTER_BIT (OUTER_L3_BIT | OUTER_L4_BIT)
+#define INNER_BIT (INNER_L3_BIT | INNER_L4_BIT)
+#define OUTER_AND_INNER (OUTER_BIT | INNER_BIT)
+
+#define PACKET_UNKNOWN BIT(4)
+
+#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_OFFSET (6UL)
+#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_MASK (0XF00)
+#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_OFFSET (8)
+
+#define EPP2SOC_PPH_EXT_ERROR_BITMAP_OFFSET (20UL)
+#define EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_MASK (0XFF)
+#define EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_OFFSET (0)
+
+#define XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(PPH_BASE_ADDR) \
+ ((*(u16 *)((u8 *)(PPH_BASE_ADDR) + EPP2SOC_PPH_EXT_TUNNEL_TYPE_OFFSET) & \
+ EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_MASK) >> EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_OFFSET)
+
+#define XSC_GET_EPP2SOC_PPH_ERROR_BITMAP(PPH_BASE_ADDR) \
+ ((*(u8 *)((u8 *)(PPH_BASE_ADDR) + EPP2SOC_PPH_EXT_ERROR_BITMAP_OFFSET) & \
+ EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_MASK) >> EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_OFFSET)
+
+#define PPH_OUTER_IP_TYPE_OFF (4UL)
+#define PPH_OUTER_IP_TYPE_MASK (0x3)
+#define PPH_OUTER_IP_TYPE_SHIFT (11)
+#define PPH_OUTER_IP_TYPE(base) \
+ ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_TYPE_OFF)) >> \
+ PPH_OUTER_IP_TYPE_SHIFT) & PPH_OUTER_IP_TYPE_MASK)
+
+#define PPH_OUTER_IP_OFST_OFF (4UL)
+#define PPH_OUTER_IP_OFST_MASK (0x1f)
+#define PPH_OUTER_IP_OFST_SHIFT (6)
+#define PPH_OUTER_IP_OFST(base) \
+ ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_OFST_OFF)) >> \
+ PPH_OUTER_IP_OFST_SHIFT) & PPH_OUTER_IP_OFST_MASK)
+
+#define PPH_OUTER_IP_LEN_OFF (4UL)
+#define PPH_OUTER_IP_LEN_MASK (0x3f)
+#define PPH_OUTER_IP_LEN_SHIFT (0)
+#define PPH_OUTER_IP_LEN(base) \
+ ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_LEN_OFF)) >> \
+ PPH_OUTER_IP_LEN_SHIFT) & PPH_OUTER_IP_LEN_MASK)
+
+#define PPH_OUTER_TP_TYPE_OFF (6UL)
+#define PPH_OUTER_TP_TYPE_MASK (0x7)
+#define PPH_OUTER_TP_TYPE_SHIFT (12)
+#define PPH_OUTER_TP_TYPE(base) \
+ ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_TP_TYPE_OFF)) >> \
+ PPH_OUTER_TP_TYPE_SHIFT) & PPH_OUTER_TP_TYPE_MASK)
+
+#define PPH_PAYLOAD_OFST_OFF (14UL)
+#define PPH_PAYLOAD_OFST_MASK (0xff)
+#define PPH_PAYLOAD_OFST_SHIFT (3)
+#define PPH_PAYLOAD_OFST(base) \
+ ((ntohs(*(u16 *)((u8 *)(base) + PPH_PAYLOAD_OFST_OFF)) >> \
+ PPH_PAYLOAD_OFST_SHIFT) & PPH_PAYLOAD_OFST_MASK)
+
+#define PPH_CSUM_OFST_OFF (38UL)
+#define PPH_CSUM_OFST_MASK (0xff)
+#define PPH_CSUM_OFST_SHIFT (51)
+#define PPH_CSUM_OFST(base) \
+ ((be64_to_cpu(*(u64 *)((u8 *)(base) + PPH_CSUM_OFST_OFF)) >> \
+ PPH_CSUM_OFST_SHIFT) & PPH_CSUM_OFST_MASK)
+
+#define PPH_CSUM_VAL_OFF (38UL)
+#define PPH_CSUM_VAL_MASK (0xeffffff)
+#define PPH_CSUM_VAL_SHIFT (22)
+#define PPH_CSUM_VAL(base) \
+ ((be64_to_cpu(*(u64 *)((u8 *)(base) + PPH_CSUM_VAL_OFF)) >> \
+ PPH_CSUM_VAL_SHIFT) & PPH_CSUM_VAL_MASK)
+#endif /* XSC_TBM_H */
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
new file mode 100644
index 000000000..ba2601361
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/*
+ * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef XSC_QUEUE_H
+#define XSC_QUEUE_H
+
+#include "common/xsc_core.h"
+
+struct xsc_sq {
+ struct xsc_core_qp cqp;
+ /* dirtied @completion */
+ u16 cc;
+ u32 dma_fifo_cc;
+
+ /* dirtied @xmit */
+ u16 pc ____cacheline_aligned_in_smp;
+ u32 dma_fifo_pc;
+
+ struct xsc_cq cq;
+
+ /* read only */
+ struct xsc_wq_cyc wq;
+ u32 dma_fifo_mask;
+ struct {
+ struct xsc_sq_dma *dma_fifo;
+ struct xsc_tx_wqe_info *wqe_info;
+ } db;
+ void __iomem *uar_map;
+ struct netdev_queue *txq;
+ u32 sqn;
+ u16 stop_room;
+
+ __be32 mkey_be;
+ unsigned long state;
+ unsigned int hw_mtu;
+
+ /* control path */
+ struct xsc_wq_ctrl wq_ctrl;
+ struct xsc_channel *channel;
+ int ch_ix;
+ int txq_ix;
+ struct work_struct recover_work;
+} ____cacheline_aligned_in_smp;
+
+#endif /* XSC_QUEUE_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 10/16] net-next/yunsilicon: Add eth needed qp and cq apis
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (8 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 09/16] net-next/yunsilicon: Init net device Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 11/16] net-next/yunsilicon: ndo_open and ndo_stop Xin Tian
` (6 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add eth needed qp and cq apis
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 18 ++
.../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +-
.../ethernet/yunsilicon/xsc/net/xsc_eth_wq.c | 109 +++++++++
.../ethernet/yunsilicon/xsc/net/xsc_eth_wq.h | 207 ++++++++++++++++++
.../net/ethernet/yunsilicon/xsc/pci/alloc.c | 96 ++++++++
.../net/ethernet/yunsilicon/xsc/pci/alloc.h | 1 -
drivers/net/ethernet/yunsilicon/xsc/pci/cq.c | 112 ++++++++++
drivers/net/ethernet/yunsilicon/xsc/pci/qp.c | 110 ++++++++++
8 files changed, 653 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 432005f11..a268d8629 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -544,9 +544,27 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev,
struct xsc_core_qp *qp);
void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
struct xsc_core_qp *qp);
+int xsc_core_eth_create_qp(struct xsc_core_device *xdev,
+ struct xsc_create_qp_mbox_in *in,
+ int insize, u32 *p_qpn);
+int xsc_core_eth_modify_qp_status(struct xsc_core_device *xdev, u32 qpn, u16 status);
+int xsc_core_eth_destroy_qp(struct xsc_core_device *xdev, u32 qpn);
+int xsc_core_eth_create_rss_qp_rqs(struct xsc_core_device *xdev,
+ struct xsc_create_multiqp_mbox_in *in,
+ int insize, int *p_qpn_base);
+int xsc_core_eth_modify_raw_qp(struct xsc_core_device *xdev,
+ struct xsc_modify_raw_qp_mbox_in *in);
+int xsc_core_eth_create_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq,
+ struct xsc_create_cq_mbox_in *in, int insize);
+int xsc_core_eth_destroy_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq);
+
struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn,
unsigned int *irqn);
+void xsc_core_fill_page_frag_array(struct xsc_frag_buf *buf, __be64 *pas, int npages);
+int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev, int size,
+ struct xsc_frag_buf *buf, int node);
+void xsc_core_frag_buf_free(struct xsc_core_device *xdev, struct xsc_frag_buf *buf);
int xsc_register_interface(struct xsc_interface *intf);
void xsc_unregister_interface(struct xsc_interface *intf);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
index 2811433af..697046979 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
-xsc_eth-y := main.o
\ No newline at end of file
+xsc_eth-y := main.o xsc_eth_wq.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c
new file mode 100644
index 000000000..4647f7f7f
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (c) 2021-2024, Shanghai Yunsilicon Technology Co., Ltd. All
+ * rights reserved.
+ * Copyright (c) 2013-2015, Mellanox Technologies, Ltd. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "xsc_eth_wq.h"
+
+u32 xsc_wq_cyc_get_size(struct xsc_wq_cyc *wq)
+{
+ return (u32)wq->fbc.sz_m1 + 1;
+}
+EXPORT_SYMBOL_GPL(xsc_wq_cyc_get_size);
+
+static u32 wq_get_byte_sz(u8 log_sz, u8 log_stride)
+{
+ return ((u32)1 << log_sz) << log_stride;
+}
+
+int xsc_eth_cqwq_create(struct xsc_core_device *xdev, struct xsc_wq_param *param,
+ u8 q_log_size, u8 ele_log_size, struct xsc_cqwq *wq,
+ struct xsc_wq_ctrl *wq_ctrl)
+{
+ u8 log_wq_stride = ele_log_size;
+ u8 log_wq_sz = q_log_size;
+ int err;
+
+ err = xsc_core_frag_buf_alloc_node(xdev, wq_get_byte_sz(log_wq_sz, log_wq_stride),
+ &wq_ctrl->buf,
+ param->buf_numa_node);
+ if (err) {
+ xsc_core_warn(xdev, "xsc_core_frag_buf_alloc_node() failed, %d\n", err);
+ goto err;
+ }
+
+ xsc_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, &wq->fbc);
+
+ wq_ctrl->xdev = xdev;
+
+ return 0;
+
+err:
+ return err;
+}
+EXPORT_SYMBOL_GPL(xsc_eth_cqwq_create);
+
+int xsc_eth_wq_cyc_create(struct xsc_core_device *xdev, struct xsc_wq_param *param,
+ u8 q_log_size, u8 ele_log_size, struct xsc_wq_cyc *wq,
+ struct xsc_wq_ctrl *wq_ctrl)
+{
+ u8 log_wq_stride = ele_log_size;
+ u8 log_wq_sz = q_log_size;
+ struct xsc_frag_buf_ctrl *fbc = &wq->fbc;
+ int err;
+
+ err = xsc_core_frag_buf_alloc_node(xdev, wq_get_byte_sz(log_wq_sz, log_wq_stride),
+ &wq_ctrl->buf, param->buf_numa_node);
+ if (err) {
+ xsc_core_warn(xdev, "xsc_core_frag_buf_alloc_node() failed, %d\n", err);
+ goto err;
+ }
+
+ xsc_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, fbc);
+ wq->sz = xsc_wq_cyc_get_size(wq);
+
+ wq_ctrl->xdev = xdev;
+
+ return 0;
+
+err:
+ return err;
+}
+EXPORT_SYMBOL_GPL(xsc_eth_wq_cyc_create);
+
+void xsc_eth_wq_destroy(struct xsc_wq_ctrl *wq_ctrl)
+{
+ xsc_core_frag_buf_free(wq_ctrl->xdev, &wq_ctrl->buf);
+}
+EXPORT_SYMBOL_GPL(xsc_eth_wq_destroy);
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h
new file mode 100644
index 000000000..b677f1482
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/*
+ * Copyright (c) 2021-2024, Shanghai Yunsilicon Technology Co., Ltd. All
+ * rights reserved.
+ * Copyright (c) 2013-2015, Mellanox Technologies, Ltd. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __XSC_WQ_H__
+#define __XSC_WQ_H__
+
+#include "common/xsc_core.h"
+
+struct xsc_wq_param {
+ int buf_numa_node;
+ int db_numa_node;
+};
+
+struct xsc_wq_ctrl {
+ struct xsc_core_device *xdev;
+ struct xsc_frag_buf buf;
+};
+
+struct xsc_wq_cyc {
+ struct xsc_frag_buf_ctrl fbc;
+ u16 sz;
+ u16 wqe_ctr;
+ u16 cur_sz;
+};
+
+struct xsc_cqwq {
+ struct xsc_frag_buf_ctrl fbc;
+ __be32 *db;
+ u32 cc; /* consumer counter */
+};
+
+enum xsc_res_type {
+ XSC_RES_UND = 0,
+ XSC_RES_RQ,
+ XSC_RES_SQ,
+ XSC_RES_MAX,
+};
+
+u32 xsc_wq_cyc_get_size(struct xsc_wq_cyc *wq);
+
+/*api for eth driver*/
+int xsc_eth_cqwq_create(struct xsc_core_device *xdev, struct xsc_wq_param *param,
+ u8 q_log_size, u8 ele_log_size, struct xsc_cqwq *wq,
+ struct xsc_wq_ctrl *wq_ctrl);
+
+int xsc_eth_wq_cyc_create(struct xsc_core_device *xdev, struct xsc_wq_param *param,
+ u8 q_log_size, u8 ele_log_size, struct xsc_wq_cyc *wq,
+ struct xsc_wq_ctrl *wq_ctrl);
+void xsc_eth_wq_destroy(struct xsc_wq_ctrl *wq_ctrl);
+
+static inline void xsc_init_fbc_offset(struct xsc_buf_list *frags,
+ u8 log_stride, u8 log_sz,
+ u16 strides_offset,
+ struct xsc_frag_buf_ctrl *fbc)
+{
+ fbc->frags = frags;
+ fbc->log_stride = log_stride;
+ fbc->log_sz = log_sz;
+ fbc->sz_m1 = (1 << fbc->log_sz) - 1;
+ fbc->log_frag_strides = PAGE_SHIFT - fbc->log_stride;
+ fbc->frag_sz_m1 = (1 << fbc->log_frag_strides) - 1;
+ fbc->strides_offset = strides_offset;
+}
+
+static inline void xsc_init_fbc(struct xsc_buf_list *frags,
+ u8 log_stride, u8 log_sz,
+ struct xsc_frag_buf_ctrl *fbc)
+{
+ xsc_init_fbc_offset(frags, log_stride, log_sz, 0, fbc);
+}
+
+static inline void *xsc_frag_buf_get_wqe(struct xsc_frag_buf_ctrl *fbc,
+ u32 ix)
+{
+ unsigned int frag;
+
+ ix += fbc->strides_offset;
+ frag = ix >> fbc->log_frag_strides;
+
+ return fbc->frags[frag].buf + ((fbc->frag_sz_m1 & ix) << fbc->log_stride);
+}
+
+static inline u32
+xsc_frag_buf_get_idx_last_contig_stride(struct xsc_frag_buf_ctrl *fbc, u32 ix)
+{
+ u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1;
+
+ return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
+}
+
+static inline int xsc_wq_cyc_missing(struct xsc_wq_cyc *wq)
+{
+ return wq->sz - wq->cur_sz;
+}
+
+static inline int xsc_wq_cyc_is_empty(struct xsc_wq_cyc *wq)
+{
+ return !wq->cur_sz;
+}
+
+static inline void xsc_wq_cyc_push(struct xsc_wq_cyc *wq)
+{
+ wq->wqe_ctr++;
+ wq->cur_sz++;
+}
+
+static inline void xsc_wq_cyc_push_n(struct xsc_wq_cyc *wq, u8 n)
+{
+ wq->wqe_ctr += n;
+ wq->cur_sz += n;
+}
+
+static inline void xsc_wq_cyc_pop(struct xsc_wq_cyc *wq)
+{
+ wq->cur_sz--;
+}
+
+static inline u16 xsc_wq_cyc_ctr2ix(struct xsc_wq_cyc *wq, u16 ctr)
+{
+ return ctr & wq->fbc.sz_m1;
+}
+
+static inline u16 xsc_wq_cyc_get_head(struct xsc_wq_cyc *wq)
+{
+ return xsc_wq_cyc_ctr2ix(wq, wq->wqe_ctr);
+}
+
+static inline u16 xsc_wq_cyc_get_tail(struct xsc_wq_cyc *wq)
+{
+ return xsc_wq_cyc_ctr2ix(wq, wq->wqe_ctr - wq->cur_sz);
+}
+
+static inline void *xsc_wq_cyc_get_wqe(struct xsc_wq_cyc *wq, u16 ix)
+{
+ return xsc_frag_buf_get_wqe(&wq->fbc, ix);
+}
+
+static inline u32 xsc_cqwq_ctr2ix(struct xsc_cqwq *wq, u32 ctr)
+{
+ return ctr & wq->fbc.sz_m1;
+}
+
+static inline u32 xsc_cqwq_get_ci(struct xsc_cqwq *wq)
+{
+ return xsc_cqwq_ctr2ix(wq, wq->cc);
+}
+
+static inline u32 xsc_cqwq_get_ctr_wrap_cnt(struct xsc_cqwq *wq, u32 ctr)
+{
+ return ctr >> wq->fbc.log_sz;
+}
+
+static inline u32 xsc_cqwq_get_wrap_cnt(struct xsc_cqwq *wq)
+{
+ return xsc_cqwq_get_ctr_wrap_cnt(wq, wq->cc);
+}
+
+static inline void xsc_cqwq_pop(struct xsc_cqwq *wq)
+{
+ wq->cc++;
+}
+
+static inline u32 xsc_cqwq_get_size(struct xsc_cqwq *wq)
+{
+ return wq->fbc.sz_m1 + 1;
+}
+
+static inline struct xsc_cqe *xsc_cqwq_get_wqe(struct xsc_cqwq *wq, u32 ix)
+{
+ struct xsc_cqe *cqe = xsc_frag_buf_get_wqe(&wq->fbc, ix);
+
+ return cqe;
+}
+
+#endif /* __XSC_WQ_H__ */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
index f95b7f660..95d02f0d7 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
@@ -127,3 +127,99 @@ void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages)
}
}
EXPORT_SYMBOL_GPL(xsc_fill_page_array);
+
+void xsc_core_fill_page_frag_array(struct xsc_frag_buf *buf, __be64 *pas, int npages)
+{
+ int i;
+ dma_addr_t addr;
+ int shift = PAGE_SHIFT - PAGE_SHIFT_4K;
+ int mask = (1 << shift) - 1;
+
+ for (i = 0; i < npages; i++) {
+ addr = buf->frags[i >> shift].map + ((i & mask) << PAGE_SHIFT_4K);
+ pas[i] = cpu_to_be64(addr);
+ }
+}
+EXPORT_SYMBOL_GPL(xsc_core_fill_page_frag_array);
+
+static void *xsc_dma_zalloc_coherent_node(struct xsc_core_device *xdev,
+ size_t size, dma_addr_t *dma_handle,
+ int node)
+{
+ struct xsc_dev_resource *dev_res = xdev->dev_res;
+ struct device *device = &xdev->pdev->dev;
+ int original_node;
+ void *cpu_handle;
+
+ /* WA for kernels that don't use numa_mem_id in alloc_pages_node */
+ if (node == NUMA_NO_NODE)
+ node = numa_mem_id();
+
+ mutex_lock(&dev_res->alloc_mutex);
+ original_node = dev_to_node(device);
+ set_dev_node(device, node);
+ cpu_handle = dma_alloc_coherent(device, size, dma_handle,
+ GFP_KERNEL);
+ set_dev_node(device, original_node);
+ mutex_unlock(&dev_res->alloc_mutex);
+ return cpu_handle;
+}
+
+int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev, int size,
+ struct xsc_frag_buf *buf, int node)
+{
+ int i;
+
+ buf->size = size;
+ buf->npages = DIV_ROUND_UP(size, PAGE_SIZE);
+ buf->page_shift = PAGE_SHIFT;
+ buf->frags = kcalloc(buf->npages, sizeof(struct xsc_buf_list),
+ GFP_KERNEL);
+ if (!buf->frags)
+ goto err_out;
+
+ for (i = 0; i < buf->npages; i++) {
+ struct xsc_buf_list *frag = &buf->frags[i];
+ int frag_sz = min_t(int, size, PAGE_SIZE);
+
+ frag->buf = xsc_dma_zalloc_coherent_node(xdev, frag_sz,
+ &frag->map, node);
+ if (!frag->buf)
+ goto err_free_buf;
+ if (frag->map & ((1 << buf->page_shift) - 1)) {
+ dma_free_coherent(&xdev->pdev->dev, frag_sz,
+ buf->frags[i].buf, buf->frags[i].map);
+ xsc_core_warn(xdev, "unexpected map alignment: %pad, page_shift=%d\n",
+ &frag->map, buf->page_shift);
+ goto err_free_buf;
+ }
+ size -= frag_sz;
+ }
+
+ return 0;
+
+err_free_buf:
+ while (i--)
+ dma_free_coherent(&xdev->pdev->dev, PAGE_SIZE, buf->frags[i].buf,
+ buf->frags[i].map);
+ kfree(buf->frags);
+err_out:
+ return -ENOMEM;
+}
+EXPORT_SYMBOL(xsc_core_frag_buf_alloc_node);
+
+void xsc_core_frag_buf_free(struct xsc_core_device *xdev, struct xsc_frag_buf *buf)
+{
+ int size = buf->size;
+ int i;
+
+ for (i = 0; i < buf->npages; i++) {
+ int frag_sz = min_t(int, size, PAGE_SIZE);
+
+ dma_free_coherent(&xdev->pdev->dev, frag_sz, buf->frags[i].buf,
+ buf->frags[i].map);
+ size -= frag_sz;
+ }
+ kfree(buf->frags);
+}
+EXPORT_SYMBOL(xsc_core_frag_buf_free);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
index a53f68eb1..5f1830059 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
@@ -12,5 +12,4 @@ int xsc_buf_alloc(struct xsc_core_device *xdev, int size, int max_direct,
struct xsc_buf *buf);
void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf);
void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages);
-
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
index ed0423ef2..385383797 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
@@ -4,6 +4,7 @@
*/
#include "common/xsc_core.h"
+#include "common/xsc_driver.h"
#include "cq.h"
void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type)
@@ -37,3 +38,114 @@ void xsc_init_cq_table(struct xsc_core_device *xdev)
spin_lock_init(&table->lock);
INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
}
+
+static int xsc_create_cq(struct xsc_core_device *xdev, u32 *p_cqn,
+ struct xsc_create_cq_mbox_in *in, int insize)
+{
+ struct xsc_create_cq_mbox_out out;
+ int ret;
+
+ memset(&out, 0, sizeof(out));
+ in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_CQ);
+ ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(xdev, "failed to create cq, err=%d out.status=%u\n",
+ ret, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ *p_cqn = be32_to_cpu(out.cqn) & 0xffffff;
+ return 0;
+}
+
+static int xsc_destroy_cq(struct xsc_core_device *xdev, u32 cqn)
+{
+ struct xsc_destroy_cq_mbox_in in;
+ struct xsc_destroy_cq_mbox_out out;
+ int ret;
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_CQ);
+ in.cqn = cpu_to_be32(cqn);
+ ret = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(xdev, "failed to destroy cq, err=%d out.status=%u\n",
+ ret, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ return 0;
+}
+
+int xsc_core_eth_create_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq,
+ struct xsc_create_cq_mbox_in *in, int insize)
+{
+ struct xsc_cq_table *table = &xdev->dev_res->cq_table;
+ u32 cqn;
+ int ret;
+ int err;
+
+ ret = xsc_create_cq(xdev, &cqn, in, insize);
+ if (ret) {
+ xsc_core_err(xdev, "xsc_create_cq failed\n");
+ return -ENOEXEC;
+ }
+ xcq->cqn = cqn;
+ xcq->cons_index = 0;
+ xcq->arm_sn = 0;
+ atomic_set(&xcq->refcount, 1);
+ init_completion(&xcq->free);
+
+ spin_lock_irq(&table->lock);
+ ret = radix_tree_insert(&table->tree, xcq->cqn, xcq);
+ spin_unlock_irq(&table->lock);
+ if (ret)
+ goto err_insert_cq;
+ return 0;
+err_insert_cq:
+ err = xsc_destroy_cq(xdev, cqn);
+ if (err)
+ xsc_core_warn(xdev, "failed to destroy cqn=%d, err=%d\n", xcq->cqn, err);
+ return ret;
+}
+EXPORT_SYMBOL(xsc_core_eth_create_cq);
+
+int xsc_core_eth_destroy_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq)
+{
+ struct xsc_cq_table *table = &xdev->dev_res->cq_table;
+ struct xsc_core_cq *tmp;
+ int err;
+
+ spin_lock_irq(&table->lock);
+ tmp = radix_tree_delete(&table->tree, xcq->cqn);
+ spin_unlock_irq(&table->lock);
+ if (!tmp) {
+ err = -ENOENT;
+ goto err_delete_cq;
+ }
+
+ if (tmp != xcq) {
+ err = -EINVAL;
+ goto err_delete_cq;
+ }
+
+ err = xsc_destroy_cq(xdev, xcq->cqn);
+ if (err)
+ goto err_destroy_cq;
+
+ if (atomic_dec_and_test(&xcq->refcount))
+ complete(&xcq->free);
+ wait_for_completion(&xcq->free);
+ return 0;
+
+err_destroy_cq:
+ xsc_core_warn(xdev, "failed to destroy cqn=%d, err=%d\n",
+ xcq->cqn, err);
+ return err;
+err_delete_cq:
+ xsc_core_warn(xdev, "cqn=%d not found in tree, err=%d\n",
+ xcq->cqn, err);
+ return err;
+}
+EXPORT_SYMBOL(xsc_core_eth_destroy_cq);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
index de58a21b5..78ec90e58 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
@@ -8,6 +8,7 @@
#include <linux/export.h>
#include <linux/kthread.h>
#include "common/xsc_core.h"
+#include "common/xsc_driver.h"
#include "qp.h"
int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -77,3 +78,112 @@ void xsc_init_qp_table(struct xsc_core_device *xdev)
spin_lock_init(&table->lock);
INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
}
+
+int xsc_core_eth_create_qp(struct xsc_core_device *xdev,
+ struct xsc_create_qp_mbox_in *in,
+ int insize, u32 *p_qpn)
+{
+ struct xsc_create_qp_mbox_out out;
+ int ret;
+
+ in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_QP);
+ ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(xdev, "failed to create sq, err=%d out.status=%u\n",
+ ret, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ *p_qpn = be32_to_cpu(out.qpn) & 0xffffff;
+
+ return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_create_qp);
+
+int xsc_core_eth_modify_qp_status(struct xsc_core_device *xdev, u32 qpn, u16 status)
+{
+ struct xsc_modify_qp_mbox_in in;
+ struct xsc_modify_qp_mbox_out out;
+ int ret = 0;
+
+ in.hdr.opcode = cpu_to_be16(status);
+ in.qpn = cpu_to_be32(qpn);
+ in.no_need_wait = 1;
+
+ ret = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (ret || out.hdr.status != 0) {
+ xsc_core_err(xdev, "failed to modify qp %u status=%u, err=%d out.status %u\n",
+ qpn, status, ret, out.hdr.status);
+ ret = -ENOEXEC;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xsc_core_eth_modify_qp_status);
+
+int xsc_core_eth_destroy_qp(struct xsc_core_device *xdev, u32 qpn)
+{
+ struct xsc_destroy_qp_mbox_in in;
+ struct xsc_destroy_qp_mbox_out out;
+ int err;
+
+ err = xsc_core_eth_modify_qp_status(xdev, qpn, XSC_CMD_OP_2RST_QP);
+ if (err) {
+ xsc_core_warn(xdev, "failed to set sq%d status=rst, err=%d\n", qpn, err);
+ return err;
+ }
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_QP);
+ in.qpn = cpu_to_be32(qpn);
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status) {
+ xsc_core_err(xdev, "failed to destroy sq%d, err=%d out.status=%u\n",
+ qpn, err, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_destroy_qp);
+
+int xsc_core_eth_modify_raw_qp(struct xsc_core_device *xdev, struct xsc_modify_raw_qp_mbox_in *in)
+{
+ struct xsc_modify_raw_qp_mbox_out out;
+ int ret;
+
+ in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_MODIFY_RAW_QP);
+
+ ret = xsc_cmd_exec(xdev, in, sizeof(struct xsc_modify_raw_qp_mbox_in),
+ &out, sizeof(struct xsc_modify_raw_qp_mbox_out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(xdev, "failed to modify sq, err=%d out.status=%u\n",
+ ret, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_modify_raw_qp);
+
+int xsc_core_eth_create_rss_qp_rqs(struct xsc_core_device *xdev,
+ struct xsc_create_multiqp_mbox_in *in,
+ int insize, int *p_qpn_base)
+{
+ int ret;
+ struct xsc_create_multiqp_mbox_out out;
+
+ in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_MULTI_QP);
+ ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(xdev,
+ "failed to create rss rq, qp_num=%d, type=%d, err=%d out.status=%u\n",
+ in->qp_num, in->qp_type, ret, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ *p_qpn_base = be32_to_cpu(out.qpn_base) & 0xffffff;
+ return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_create_rss_qp_rqs);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 11/16] net-next/yunsilicon: ndo_open and ndo_stop
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (9 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 10/16] net-next/yunsilicon: Add eth needed qp and cq apis Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 12/16] net-next/yunsilicon: Add ndo_start_xmit Xin Tian
` (5 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add ndo_open and ndo_stop
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 50 +
.../yunsilicon/xsc/common/xsc_device.h | 35 +
.../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +-
.../net/ethernet/yunsilicon/xsc/net/main.c | 1506 ++++++++++++++++-
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 8 +
.../yunsilicon/xsc/net/xsc_eth_common.h | 143 ++
.../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 43 +
.../yunsilicon/xsc/net/xsc_eth_txrx.c | 99 ++
.../yunsilicon/xsc/net/xsc_eth_txrx.h | 26 +
.../ethernet/yunsilicon/xsc/net/xsc_queue.h | 145 ++
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +-
.../net/ethernet/yunsilicon/xsc/pci/vport.c | 30 +
12 files changed, 2085 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index a268d8629..417cb021c 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -237,6 +237,40 @@ enum xsc_event {
XSC_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,//IBV_EVENT_QP_ACCESS_ERR
};
+struct xsc_cqe {
+ union {
+ u8 msg_opcode;
+ struct {
+ u8 error_code:7;
+ u8 is_error:1;
+ };
+ };
+ __le32 qp_id:15;
+ u8 rsv1:1;
+ u8 se:1;
+ u8 has_pph:1;
+ u8 type:1;
+ u8 with_immdt:1;
+ u8 csum_err:4;
+ __le32 imm_data;
+ __le32 msg_len;
+ __le32 vni;
+ __le64 ts:48;
+ __le16 wqe_id;
+ __le16 rsv[3];
+ __le16 rsv2:15;
+ u8 owner:1;
+};
+
+union xsc_cq_doorbell {
+ struct{
+ u32 cq_next_cid:16;
+ u32 cq_id:15;
+ u32 arm:1;
+ };
+ u32 val;
+};
+
struct xsc_core_cq {
u32 cqn;
int cqe_sz;
@@ -510,6 +544,8 @@ struct xsc_core_device {
int bar_num;
u8 mac_port;
+ u8 pcie_no;
+ u8 pf_id;
u16 glb_func_id;
u16 msix_vec_base;
@@ -538,6 +574,8 @@ struct xsc_core_device {
u32 fw_version_tweak;
u8 fw_version_extra_flag;
cpumask_var_t xps_cpumask;
+
+ u8 user_mode;
};
int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -569,6 +607,8 @@ void xsc_core_frag_buf_free(struct xsc_core_device *xdev, struct xsc_frag_buf *b
int xsc_register_interface(struct xsc_interface *intf);
void xsc_unregister_interface(struct xsc_interface *intf);
+u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport);
+
static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
{
if (likely(BITS_PER_LONG == 64 || buf->nbufs == 1))
@@ -583,4 +623,14 @@ static inline bool xsc_fw_is_available(struct xsc_core_device *xdev)
return xdev->cmd.cmd_status == XSC_CMD_STATUS_NORMAL;
}
+static inline void xsc_set_user_mode(struct xsc_core_device *xdev, u8 mode)
+{
+ xdev->user_mode = mode;
+}
+
+static inline u8 xsc_get_user_mode(struct xsc_core_device *xdev)
+{
+ return xdev->user_mode;
+}
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
index 1a4838356..2cc6cb7c3 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
@@ -6,6 +6,22 @@
#ifndef XSC_DEVICE_H
#define XSC_DEVICE_H
+#include <linux/types.h>
+#include <linux/ethtool.h>
+
+/* QP type */
+enum {
+ XSC_QUEUE_TYPE_RDMA_RC = 0,
+ XSC_QUEUE_TYPE_RDMA_MAD = 1,
+ XSC_QUEUE_TYPE_RAW = 2,
+ XSC_QUEUE_TYPE_VIRTIO_NET = 3,
+ XSC_QUEUE_TYPE_VIRTIO_BLK = 4,
+ XSC_QUEUE_TYPE_RAW_TPE = 5,
+ XSC_QUEUE_TYPE_RAW_TSO = 6,
+ XSC_QUEUE_TYPE_RAW_TX = 7,
+ XSC_QUEUE_TYPE_INVALID = 0xFF,
+};
+
enum xsc_traffic_types {
XSC_TT_IPV4,
XSC_TT_IPV4_TCP,
@@ -39,4 +55,23 @@ struct xsc_tirc_config {
u32 rx_hash_fields;
};
+enum {
+ XSC_HASH_FUNC_XOR = 0,
+ XSC_HASH_FUNC_TOP = 1,
+ XSC_HASH_FUNC_TOP_SYM = 2,
+ XSC_HASH_FUNC_RSV = 3,
+};
+
+static inline u8 xsc_hash_func_type(u8 hash_func)
+{
+ switch (hash_func) {
+ case ETH_RSS_HASH_TOP:
+ return XSC_HASH_FUNC_TOP;
+ case ETH_RSS_HASH_XOR:
+ return XSC_HASH_FUNC_XOR;
+ default:
+ return XSC_HASH_FUNC_TOP;
+ }
+}
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
index 697046979..104ef5330 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
-xsc_eth-y := main.o xsc_eth_wq.o
\ No newline at end of file
+xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_rx.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index 9e3369eb9..dd2f99537 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -6,12 +6,14 @@
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
+#include <linux/irq.h>
#include "common/xsc_core.h"
#include "common/xsc_driver.h"
#include "common/xsc_device.h"
#include "common/xsc_pp.h"
#include "xsc_eth_common.h"
#include "xsc_eth.h"
+#include "xsc_eth_txrx.h"
#define XSC_ETH_DRV_DESC "Yunsilicon Xsc ethernet driver"
@@ -114,6 +116,8 @@ static int xsc_eth_netdev_init(struct xsc_adapter *adapter)
if (!adapter->txq2sq)
goto err_out;
+ mutex_init(&adapter->status_lock);
+
adapter->workq = create_singlethread_workqueue("xsc_eth");
if (!adapter->workq)
goto err_free_priv;
@@ -128,11 +132,1508 @@ static int xsc_eth_netdev_init(struct xsc_adapter *adapter)
return -ENOMEM;
}
-static int xsc_eth_close(struct net_device *netdev)
+static void xsc_eth_build_queue_param(struct xsc_adapter *adapter,
+ struct xsc_queue_attr *attr, u8 type)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+
+ if (adapter->nic_param.sq_size == 0)
+ adapter->nic_param.sq_size = BIT(xdev->caps.log_max_qp_depth);
+ if (adapter->nic_param.rq_size == 0)
+ adapter->nic_param.rq_size = BIT(xdev->caps.log_max_qp_depth);
+
+ if (type == XSC_QUEUE_TYPE_EQ) {
+ attr->q_type = XSC_QUEUE_TYPE_EQ;
+ attr->ele_num = XSC_EQ_ELE_NUM;
+ attr->ele_size = XSC_EQ_ELE_SZ;
+ attr->ele_log_size = order_base_2(XSC_EQ_ELE_SZ);
+ attr->q_log_size = order_base_2(XSC_EQ_ELE_NUM);
+ } else if (type == XSC_QUEUE_TYPE_RQCQ) {
+ attr->q_type = XSC_QUEUE_TYPE_RQCQ;
+ attr->ele_num = min_t(int, XSC_RQCQ_ELE_NUM, xdev->caps.max_cqes);
+ attr->ele_size = XSC_RQCQ_ELE_SZ;
+ attr->ele_log_size = order_base_2(XSC_RQCQ_ELE_SZ);
+ attr->q_log_size = order_base_2(attr->ele_num);
+ } else if (type == XSC_QUEUE_TYPE_SQCQ) {
+ attr->q_type = XSC_QUEUE_TYPE_SQCQ;
+ attr->ele_num = min_t(int, XSC_SQCQ_ELE_NUM, xdev->caps.max_cqes);
+ attr->ele_size = XSC_SQCQ_ELE_SZ;
+ attr->ele_log_size = order_base_2(XSC_SQCQ_ELE_SZ);
+ attr->q_log_size = order_base_2(attr->ele_num);
+ } else if (type == XSC_QUEUE_TYPE_RQ) {
+ attr->q_type = XSC_QUEUE_TYPE_RQ;
+ attr->ele_num = adapter->nic_param.rq_size;
+ attr->ele_size = xdev->caps.recv_ds_num * XSC_RECV_WQE_DS;
+ attr->ele_log_size = order_base_2(attr->ele_size);
+ attr->q_log_size = order_base_2(attr->ele_num);
+ } else if (type == XSC_QUEUE_TYPE_SQ) {
+ attr->q_type = XSC_QUEUE_TYPE_SQ;
+ attr->ele_num = adapter->nic_param.sq_size;
+ attr->ele_size = xdev->caps.send_ds_num * XSC_SEND_WQE_DS;
+ attr->ele_log_size = order_base_2(attr->ele_size);
+ attr->q_log_size = order_base_2(attr->ele_num);
+ }
+}
+
+static u32 xsc_rx_get_linear_frag_sz(u32 mtu)
+{
+ u32 byte_count = XSC_SW2HW_FRAG_SIZE(mtu);
+
+ return XSC_SKB_FRAG_SZ(byte_count);
+}
+
+static bool xsc_rx_is_linear_skb(u32 mtu)
+{
+ u32 linear_frag_sz = xsc_rx_get_linear_frag_sz(mtu);
+
+ return linear_frag_sz <= PAGE_SIZE;
+}
+
+static u32 xsc_get_rq_frag_info(struct xsc_rq_frags_info *frags_info, u32 mtu)
+{
+ u32 byte_count = XSC_SW2HW_FRAG_SIZE(mtu);
+ int frag_stride;
+ int i = 0;
+
+ if (xsc_rx_is_linear_skb(mtu)) {
+ frag_stride = xsc_rx_get_linear_frag_sz(mtu);
+ frag_stride = roundup_pow_of_two(frag_stride);
+
+ frags_info->arr[0].frag_size = byte_count;
+ frags_info->arr[0].frag_stride = frag_stride;
+ frags_info->num_frags = 1;
+ frags_info->wqe_bulk = PAGE_SIZE / frag_stride;
+ frags_info->wqe_bulk_min = frags_info->wqe_bulk;
+ goto out;
+ }
+
+ if (byte_count <= DEFAULT_FRAG_SIZE) {
+ frags_info->arr[0].frag_size = DEFAULT_FRAG_SIZE;
+ frags_info->arr[0].frag_stride = DEFAULT_FRAG_SIZE;
+ frags_info->num_frags = 1;
+ } else if (byte_count <= PAGE_SIZE_4K) {
+ frags_info->arr[0].frag_size = PAGE_SIZE_4K;
+ frags_info->arr[0].frag_stride = PAGE_SIZE_4K;
+ frags_info->num_frags = 1;
+ } else if (byte_count <= (PAGE_SIZE_4K + DEFAULT_FRAG_SIZE)) {
+ if (PAGE_SIZE < 2 * PAGE_SIZE_4K) {
+ frags_info->arr[0].frag_size = PAGE_SIZE_4K;
+ frags_info->arr[0].frag_stride = PAGE_SIZE_4K;
+ frags_info->arr[1].frag_size = PAGE_SIZE_4K;
+ frags_info->arr[1].frag_stride = PAGE_SIZE_4K;
+ frags_info->num_frags = 2;
+ } else {
+ frags_info->arr[0].frag_size = 2 * PAGE_SIZE_4K;
+ frags_info->arr[0].frag_stride = 2 * PAGE_SIZE_4K;
+ frags_info->num_frags = 1;
+ }
+ } else if (byte_count <= 2 * PAGE_SIZE_4K) {
+ if (PAGE_SIZE < 2 * PAGE_SIZE_4K) {
+ frags_info->arr[0].frag_size = PAGE_SIZE_4K;
+ frags_info->arr[0].frag_stride = PAGE_SIZE_4K;
+ frags_info->arr[1].frag_size = PAGE_SIZE_4K;
+ frags_info->arr[1].frag_stride = PAGE_SIZE_4K;
+ frags_info->num_frags = 2;
+ } else {
+ frags_info->arr[0].frag_size = 2 * PAGE_SIZE_4K;
+ frags_info->arr[0].frag_stride = 2 * PAGE_SIZE_4K;
+ frags_info->num_frags = 1;
+ }
+ } else {
+ if (PAGE_SIZE < 4 * PAGE_SIZE_4K) {
+ frags_info->num_frags = roundup(byte_count, PAGE_SIZE_4K) / PAGE_SIZE_4K;
+ for (i = 0; i < frags_info->num_frags; i++) {
+ frags_info->arr[i].frag_size = PAGE_SIZE_4K;
+ frags_info->arr[i].frag_stride = PAGE_SIZE_4K;
+ }
+ } else {
+ frags_info->arr[0].frag_size = 4 * PAGE_SIZE_4K;
+ frags_info->arr[0].frag_stride = 4 * PAGE_SIZE_4K;
+ frags_info->num_frags = 1;
+ }
+ }
+
+ if (PAGE_SIZE <= PAGE_SIZE_4K) {
+ frags_info->wqe_bulk_min = 4;
+ frags_info->wqe_bulk = max_t(u8, frags_info->wqe_bulk_min, 8);
+ } else if (PAGE_SIZE <= 2 * PAGE_SIZE_4K) {
+ frags_info->wqe_bulk = 2;
+ frags_info->wqe_bulk_min = frags_info->wqe_bulk;
+ } else {
+ frags_info->wqe_bulk =
+ PAGE_SIZE / (frags_info->num_frags * frags_info->arr[0].frag_size);
+ frags_info->wqe_bulk_min = frags_info->wqe_bulk;
+ }
+
+out:
+ frags_info->log_num_frags = order_base_2(frags_info->num_frags);
+
+ return frags_info->num_frags * frags_info->arr[0].frag_size;
+}
+
+static void xsc_build_rq_frags_info(struct xsc_queue_attr *attr,
+ struct xsc_rq_frags_info *frags_info,
+ struct xsc_eth_params *params)
+{
+ params->rq_frags_size = xsc_get_rq_frag_info(frags_info, params->mtu);
+ frags_info->frags_max_num = attr->ele_size / XSC_RECV_WQE_DS;
+}
+
+static void xsc_eth_build_channel_param(struct xsc_adapter *adapter,
+ struct xsc_channel_param *chl_param)
+{
+ xsc_eth_build_queue_param(adapter, &chl_param->rqcq_param.cq_attr,
+ XSC_QUEUE_TYPE_RQCQ);
+ chl_param->rqcq_param.wq.buf_numa_node = dev_to_node(adapter->dev);
+
+ xsc_eth_build_queue_param(adapter, &chl_param->sqcq_param.cq_attr,
+ XSC_QUEUE_TYPE_SQCQ);
+ chl_param->sqcq_param.wq.buf_numa_node = dev_to_node(adapter->dev);
+
+ xsc_eth_build_queue_param(adapter, &chl_param->sq_param.sq_attr,
+ XSC_QUEUE_TYPE_SQ);
+ chl_param->sq_param.wq.buf_numa_node = dev_to_node(adapter->dev);
+
+ xsc_eth_build_queue_param(adapter, &chl_param->rq_param.rq_attr,
+ XSC_QUEUE_TYPE_RQ);
+ chl_param->rq_param.wq.buf_numa_node = dev_to_node(adapter->dev);
+
+ xsc_build_rq_frags_info(&chl_param->rq_param.rq_attr,
+ &chl_param->rq_param.frags_info,
+ &adapter->nic_param);
+}
+
+static void xsc_eth_cq_error_event(struct xsc_core_cq *xcq, enum xsc_event event)
+{
+ struct xsc_cq *xsc_cq = container_of(xcq, struct xsc_cq, xcq);
+ struct xsc_core_device *xdev = xsc_cq->xdev;
+
+ if (event != XSC_EVENT_TYPE_CQ_ERROR) {
+ xsc_core_err(xdev, "Unexpected event type %d on CQ %06x\n",
+ event, xcq->cqn);
+ return;
+ }
+
+ xsc_core_err(xdev, "Eth catch CQ ERROR:%x, cqn: %d\n", event, xcq->cqn);
+}
+
+static void xsc_eth_completion_event(struct xsc_core_cq *xcq)
+{
+ struct xsc_cq *cq = container_of(xcq, struct xsc_cq, xcq);
+ struct xsc_core_device *xdev = cq->xdev;
+ struct xsc_rq *rq = NULL;
+
+ if (unlikely(!cq->channel)) {
+ xsc_core_warn(xdev, "cq%d->channel is null\n", xcq->cqn);
+ return;
+ }
+
+ rq = &cq->channel->qp.rq[0];
+
+ set_bit(XSC_CHANNEL_NAPI_SCHED, &cq->channel->flags);
+
+ if (!test_bit(XSC_ETH_RQ_STATE_ENABLED, &rq->state))
+ xsc_core_warn(xdev, "ch%d_cq%d, napi_flag=0x%lx\n",
+ cq->channel->chl_idx, xcq->cqn, cq->napi->state);
+
+ napi_schedule(cq->napi);
+ cq->event_ctr++;
+}
+
+static int xsc_eth_alloc_cq(struct xsc_channel *c, struct xsc_cq *pcq,
+ struct xsc_cq_param *pcq_param)
+{
+ int ret;
+ struct xsc_core_device *xdev = c->adapter->xdev;
+ struct xsc_core_cq *core_cq = &pcq->xcq;
+ u32 i;
+ u8 q_log_size = pcq_param->cq_attr.q_log_size;
+ u8 ele_log_size = pcq_param->cq_attr.ele_log_size;
+
+ pcq_param->wq.db_numa_node = cpu_to_node(c->cpu);
+ pcq_param->wq.buf_numa_node = cpu_to_node(c->cpu);
+
+ ret = xsc_eth_cqwq_create(xdev, &pcq_param->wq,
+ q_log_size, ele_log_size, &pcq->wq,
+ &pcq->wq_ctrl);
+ if (ret)
+ return ret;
+
+ core_cq->cqe_sz = pcq_param->cq_attr.ele_num;
+ core_cq->comp = xsc_eth_completion_event;
+ core_cq->event = xsc_eth_cq_error_event;
+ core_cq->vector = c->chl_idx;
+
+ for (i = 0; i < xsc_cqwq_get_size(&pcq->wq); i++) {
+ struct xsc_cqe *cqe = xsc_cqwq_get_wqe(&pcq->wq, i);
+
+ cqe->owner = 1;
+ }
+ pcq->xdev = xdev;
+
+ return ret;
+}
+
+static int xsc_eth_set_cq(struct xsc_channel *c,
+ struct xsc_cq *pcq,
+ struct xsc_cq_param *pcq_param)
+{
+ int ret = XSCALE_RET_SUCCESS;
+ struct xsc_core_device *xdev = c->adapter->xdev;
+ struct xsc_create_cq_mbox_in *in;
+ int inlen;
+ int eqn, irqn;
+ int hw_npages;
+
+ hw_npages = DIV_ROUND_UP(pcq->wq_ctrl.buf.size, PAGE_SIZE_4K);
+ /*mbox size + pas size*/
+ inlen = sizeof(struct xsc_create_cq_mbox_in) +
+ sizeof(__be64) * hw_npages;
+
+ in = kvzalloc(inlen, GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+
+ /*construct param of in struct*/
+ ret = xsc_core_vector2eqn(xdev, c->chl_idx, &eqn, &irqn);
+ if (ret)
+ goto err;
+
+ in->ctx.eqn = eqn;
+ in->ctx.eqn = cpu_to_be16(in->ctx.eqn);
+ in->ctx.log_cq_sz = pcq_param->cq_attr.q_log_size;
+ in->ctx.pa_num = cpu_to_be16(hw_npages);
+ in->ctx.glb_func_id = cpu_to_be16(xdev->glb_func_id);
+
+ xsc_core_fill_page_frag_array(&pcq->wq_ctrl.buf, &in->pas[0], hw_npages);
+
+ ret = xsc_core_eth_create_cq(c->adapter->xdev, &pcq->xcq, in, inlen);
+ if (ret == 0) {
+ pcq->xcq.irqn = irqn;
+ pcq->xcq.eq = xsc_core_eq_get(xdev, pcq->xcq.vector);
+ }
+
+err:
+ kvfree(in);
+ xsc_core_info(c->adapter->xdev, "create ch%d cqn%d, eqn=%d, func_id=%d, ret=%d\n",
+ c->chl_idx, pcq->xcq.cqn, eqn, xdev->glb_func_id, ret);
+ return ret;
+}
+
+static void xsc_eth_free_cq(struct xsc_cq *cq)
+{
+ xsc_eth_wq_destroy(&cq->wq_ctrl);
+}
+
+static int xsc_eth_open_cq(struct xsc_channel *c,
+ struct xsc_cq *pcq,
+ struct xsc_cq_param *pcq_param)
+{
+ int ret;
+
+ ret = xsc_eth_alloc_cq(c, pcq, pcq_param);
+ if (ret)
+ return ret;
+
+ ret = xsc_eth_set_cq(c, pcq, pcq_param);
+ if (ret)
+ goto err_set_cq;
+
+ xsc_cq_notify_hw_rearm(pcq);
+
+ pcq->napi = &c->napi;
+ pcq->channel = c;
+ pcq->rx = (pcq_param->cq_attr.q_type == XSC_QUEUE_TYPE_RQCQ) ? 1 : 0;
+
+ return 0;
+
+err_set_cq:
+ xsc_eth_free_cq(pcq);
+ return ret;
+}
+
+static int xsc_eth_close_cq(struct xsc_channel *c, struct xsc_cq *pcq)
+{
+ int ret;
+ struct xsc_core_device *xdev = c->adapter->xdev;
+
+ ret = xsc_core_eth_destroy_cq(xdev, &pcq->xcq);
+ if (ret) {
+ xsc_core_warn(xdev, "failed to close ch%d cq%d, ret=%d\n",
+ c->chl_idx, pcq->xcq.cqn, ret);
+ return ret;
+ }
+
+ xsc_eth_free_cq(pcq);
+
+ return 0;
+}
+
+static void xsc_free_qp_sq_db(struct xsc_sq *sq)
+{
+ kvfree(sq->db.wqe_info);
+ kvfree(sq->db.dma_fifo);
+}
+
+static void xsc_free_qp_sq(struct xsc_sq *sq)
+{
+ xsc_free_qp_sq_db(sq);
+ xsc_eth_wq_destroy(&sq->wq_ctrl);
+}
+
+static int xsc_eth_alloc_qp_sq_db(struct xsc_sq *sq, int numa)
+{
+ int wq_sz = xsc_wq_cyc_get_size(&sq->wq);
+ struct xsc_core_device *xdev = sq->cq.xdev;
+ int df_sz = wq_sz * xdev->caps.send_ds_num;
+
+ sq->db.dma_fifo = kvzalloc_node(array_size(df_sz, sizeof(*sq->db.dma_fifo)),
+ GFP_KERNEL, numa);
+ sq->db.wqe_info = kvzalloc_node(array_size(wq_sz, sizeof(*sq->db.wqe_info)),
+ GFP_KERNEL, numa);
+
+ if (!sq->db.dma_fifo || !sq->db.wqe_info) {
+ xsc_free_qp_sq_db(sq);
+ return -ENOMEM;
+ }
+
+ sq->dma_fifo_mask = df_sz - 1;
+
+ return 0;
+}
+
+static void xsc_eth_qp_event(struct xsc_core_qp *qp, int type)
+{
+ struct xsc_rq *rq;
+ struct xsc_sq *sq;
+ struct xsc_core_device *xdev;
+
+ if (qp->eth_queue_type == XSC_RES_RQ) {
+ rq = container_of(qp, struct xsc_rq, cqp);
+ xdev = rq->cq.xdev;
+ } else if (qp->eth_queue_type == XSC_RES_SQ) {
+ sq = container_of(qp, struct xsc_sq, cqp);
+ xdev = sq->cq.xdev;
+ } else {
+ pr_err("%s:Unknown eth qp type %d\n", __func__, type);
+ return;
+ }
+
+ switch (type) {
+ case XSC_EVENT_TYPE_WQ_CATAS_ERROR:
+ case XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
+ case XSC_EVENT_TYPE_WQ_ACCESS_ERROR:
+ xsc_core_err(xdev, "%s:Async event %x on QP %d\n", __func__, type, qp->qpn);
+ break;
+ default:
+ xsc_core_err(xdev, "%s: Unexpected event type %d on QP %d\n",
+ __func__, type, qp->qpn);
+ return;
+ }
+}
+
+static int xsc_eth_open_qp_sq(struct xsc_channel *c,
+ struct xsc_sq *psq,
+ struct xsc_sq_param *psq_param,
+ u32 sq_idx)
+{
+ struct xsc_adapter *adapter = c->adapter;
+ struct xsc_core_device *xdev = adapter->xdev;
+ u8 q_log_size = psq_param->sq_attr.q_log_size;
+ u8 ele_log_size = psq_param->sq_attr.ele_log_size;
+ struct xsc_create_qp_mbox_in *in;
+ struct xsc_modify_raw_qp_mbox_in *modify_in;
+ int hw_npages;
+ int inlen;
+ int ret;
+
+ psq_param->wq.db_numa_node = cpu_to_node(c->cpu);
+
+ ret = xsc_eth_wq_cyc_create(xdev, &psq_param->wq,
+ q_log_size, ele_log_size, &psq->wq,
+ &psq->wq_ctrl);
+ if (ret)
+ return ret;
+
+ hw_npages = DIV_ROUND_UP(psq->wq_ctrl.buf.size, PAGE_SIZE_4K);
+ inlen = sizeof(struct xsc_create_qp_mbox_in) +
+ sizeof(__be64) * hw_npages;
+
+ in = kvzalloc(inlen, GFP_KERNEL);
+ if (!in) {
+ ret = -ENOMEM;
+ goto err_sq_wq_destroy;
+ }
+ in->req.input_qpn = cpu_to_be16(XSC_QPN_SQN_STUB); /*no use for eth*/
+ in->req.qp_type = XSC_QUEUE_TYPE_RAW_TSO; /*default sq is tso qp*/
+ in->req.log_sq_sz = ilog2(xdev->caps.send_ds_num) + q_log_size;
+ in->req.pa_num = cpu_to_be16(hw_npages);
+ in->req.cqn_send = cpu_to_be16(psq->cq.xcq.cqn);
+ in->req.cqn_recv = in->req.cqn_send;
+ in->req.glb_funcid = cpu_to_be16(xdev->glb_func_id);
+
+ xsc_core_fill_page_frag_array(&psq->wq_ctrl.buf,
+ &in->req.pas[0], hw_npages);
+
+ ret = xsc_core_eth_create_qp(xdev, in, inlen, &psq->sqn);
+ if (ret)
+ goto err_sq_in_destroy;
+
+ psq->cqp.qpn = psq->sqn;
+ psq->cqp.event = xsc_eth_qp_event;
+ psq->cqp.eth_queue_type = XSC_RES_SQ;
+
+ ret = xsc_core_create_resource_common(xdev, &psq->cqp);
+ if (ret) {
+ xsc_core_err(xdev, "%s:error qp:%d errno:%d\n",
+ __func__, psq->sqn, ret);
+ goto err_sq_destroy;
+ }
+
+ psq->channel = c;
+ psq->ch_ix = c->chl_idx;
+ psq->txq_ix = psq->ch_ix + sq_idx * adapter->channels.num_chl;
+
+ /*need to querify from hardware*/
+ psq->hw_mtu = XSC_ETH_HW_MTU_SEND;
+ psq->stop_room = 1;
+
+ ret = xsc_eth_alloc_qp_sq_db(psq, psq_param->wq.db_numa_node);
+ if (ret)
+ goto err_sq_common_destroy;
+
+ inlen = sizeof(struct xsc_modify_raw_qp_mbox_in);
+ modify_in = kvzalloc(inlen, GFP_KERNEL);
+ if (!modify_in) {
+ ret = -ENOMEM;
+ goto err_sq_common_destroy;
+ }
+
+ modify_in->req.qp_out_port = xdev->pf_id;
+ modify_in->pcie_no = xdev->pcie_no;
+ modify_in->req.qpn = cpu_to_be16((u16)(psq->sqn));
+ modify_in->req.func_id = cpu_to_be16(xdev->glb_func_id);
+ modify_in->req.dma_direct = DMA_DIR_TO_MAC;
+ modify_in->req.prio = sq_idx;
+ ret = xsc_core_eth_modify_raw_qp(xdev, modify_in);
+ if (ret)
+ goto err_sq_modify_in_destroy;
+
+ kvfree(modify_in);
+ kvfree(in);
+
+ xsc_core_info(c->adapter->xdev,
+ "open sq ok, ch%d_sq%d_qpn=%d, state=0x%lx, db_numa=%d, buf_numa=%d\n",
+ c->chl_idx, sq_idx, psq->sqn, psq->state,
+ psq_param->wq.db_numa_node, psq_param->wq.buf_numa_node);
+
+ return 0;
+
+err_sq_modify_in_destroy:
+ kvfree(modify_in);
+
+err_sq_common_destroy:
+ xsc_core_destroy_resource_common(xdev, &psq->cqp);
+
+err_sq_destroy:
+ xsc_core_eth_destroy_qp(xdev, psq->cqp.qpn);
+
+err_sq_in_destroy:
+ kvfree(in);
+
+err_sq_wq_destroy:
+ xsc_eth_wq_destroy(&psq->wq_ctrl);
+ return ret;
+}
+
+static int xsc_eth_close_qp_sq(struct xsc_channel *c, struct xsc_sq *psq)
+{
+ struct xsc_core_device *xdev = c->adapter->xdev;
+ int ret;
+
+ xsc_core_destroy_resource_common(xdev, &psq->cqp);
+
+ ret = xsc_core_eth_destroy_qp(xdev, psq->cqp.qpn);
+ if (ret)
+ return ret;
+
+ xsc_free_qp_sq(psq);
+
+ return 0;
+}
+
+static int xsc_eth_open_channel(struct xsc_adapter *adapter,
+ int idx,
+ struct xsc_channel *c,
+ struct xsc_channel_param *chl_param)
+{
+ int ret = 0;
+ struct net_device *netdev = adapter->netdev;
+ struct xsc_core_device *xdev = adapter->xdev;
+ int i, j, eqn, irqn;
+ const struct cpumask *aff;
+
+ c->adapter = adapter;
+ c->netdev = adapter->netdev;
+ c->chl_idx = idx;
+ c->num_tc = adapter->nic_param.num_tc;
+
+ /*1rq per channel, and may have multi sqs per channel*/
+ c->qp.rq_num = 1;
+ c->qp.sq_num = c->num_tc;
+
+ if (xdev->caps.msix_enable) {
+ ret = xsc_core_vector2eqn(xdev, c->chl_idx, &eqn, &irqn);
+ if (ret)
+ goto err;
+ aff = irq_get_affinity_mask(irqn);
+ c->aff_mask = aff;
+ c->cpu = cpumask_first(aff);
+ }
+
+ if (c->qp.sq_num > XSC_MAX_NUM_TC || c->qp.rq_num > XSC_MAX_NUM_TC) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ for (i = 0; i < c->qp.rq_num; i++) {
+ ret = xsc_eth_open_cq(c, &c->qp.rq[i].cq, &chl_param->rqcq_param);
+ if (ret) {
+ j = i - 1;
+ goto err_open_rq_cq;
+ }
+ }
+
+ for (i = 0; i < c->qp.sq_num; i++) {
+ ret = xsc_eth_open_cq(c, &c->qp.sq[i].cq, &chl_param->sqcq_param);
+ if (ret) {
+ j = i - 1;
+ goto err_open_sq_cq;
+ }
+ }
+
+ for (i = 0; i < c->qp.sq_num; i++) {
+ ret = xsc_eth_open_qp_sq(c, &c->qp.sq[i], &chl_param->sq_param, i);
+ if (ret) {
+ j = i - 1;
+ goto err_open_sq;
+ }
+ }
+ netif_napi_add(netdev, &c->napi, xsc_eth_napi_poll);
+
+ xsc_core_dbg(adapter->xdev, "open channel%d ok\n", idx);
+ return 0;
+
+err_open_sq:
+ for (; j >= 0; j--)
+ xsc_eth_close_qp_sq(c, &c->qp.sq[j]);
+ j = (c->qp.rq_num - 1);
+err_open_sq_cq:
+ for (; j >= 0; j--)
+ xsc_eth_close_cq(c, &c->qp.sq[j].cq);
+ j = (c->qp.rq_num - 1);
+err_open_rq_cq:
+ for (; j >= 0; j--)
+ xsc_eth_close_cq(c, &c->qp.rq[j].cq);
+err:
+ xsc_core_warn(adapter->xdev,
+ "failed to open channel: ch%d, sq_num=%d, rq_num=%d, err=%d\n",
+ idx, c->qp.sq_num, c->qp.rq_num, ret);
+ return ret;
+}
+
+static int xsc_eth_modify_qps_channel(struct xsc_adapter *adapter, struct xsc_channel *c)
+{
+ int ret = 0;
+ int i;
+
+ for (i = 0; i < c->qp.rq_num; i++) {
+ c->qp.rq[i].post_wqes(&c->qp.rq[i]);
+ ret = xsc_core_eth_modify_qp_status(adapter->xdev, c->qp.rq[i].rqn,
+ XSC_CMD_OP_RTR2RTS_QP);
+ if (ret)
+ return ret;
+ }
+
+ for (i = 0; i < c->qp.sq_num; i++) {
+ ret = xsc_core_eth_modify_qp_status(adapter->xdev, c->qp.sq[i].sqn,
+ XSC_CMD_OP_RTR2RTS_QP);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static int xsc_eth_modify_qps(struct xsc_adapter *adapter,
+ struct xsc_eth_channels *chls)
+{
+ int ret;
+ int i;
+
+ for (i = 0; i < chls->num_chl; i++) {
+ struct xsc_channel *c = &chls->c[i];
+
+ ret = xsc_eth_modify_qps_channel(adapter, c);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void xsc_eth_init_frags_partition(struct xsc_rq *rq)
+{
+ struct xsc_wqe_frag_info next_frag = {};
+ struct xsc_wqe_frag_info *prev;
+ int i;
+
+ next_frag.di = &rq->wqe.di[0];
+ next_frag.offset = 0;
+ prev = NULL;
+
+ for (i = 0; i < xsc_wq_cyc_get_size(&rq->wqe.wq); i++) {
+ struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
+ struct xsc_wqe_frag_info *frag =
+ &rq->wqe.frags[i << rq->wqe.info.log_num_frags];
+ int f;
+
+ for (f = 0; f < rq->wqe.info.num_frags; f++, frag++) {
+ if (next_frag.offset + frag_info[f].frag_stride >
+ XSC_RX_FRAG_SZ) {
+ next_frag.di++;
+ next_frag.offset = 0;
+ if (prev)
+ prev->last_in_page = 1;
+ }
+ *frag = next_frag;
+
+ /* prepare next */
+ next_frag.offset += frag_info[f].frag_stride;
+ prev = frag;
+ }
+ }
+
+ if (prev)
+ prev->last_in_page = 1;
+}
+
+static int xsc_eth_init_di_list(struct xsc_rq *rq, int wq_sz, int cpu)
+{
+ int len = wq_sz << rq->wqe.info.log_num_frags;
+
+ rq->wqe.di = kvzalloc_node(array_size(len, sizeof(*rq->wqe.di)),
+ GFP_KERNEL, cpu_to_node(cpu));
+ if (!rq->wqe.di)
+ return -ENOMEM;
+
+ xsc_eth_init_frags_partition(rq);
+
+ return 0;
+}
+
+static void xsc_eth_free_di_list(struct xsc_rq *rq)
+{
+ kvfree(rq->wqe.di);
+}
+
+static int xsc_eth_alloc_rq(struct xsc_channel *c,
+ struct xsc_rq *prq,
+ struct xsc_rq_param *prq_param)
+{
+ struct xsc_adapter *adapter = c->adapter;
+ u8 q_log_size = prq_param->rq_attr.q_log_size;
+ struct page_pool_params pagepool_params = { 0 };
+ u32 pool_size = 1 << q_log_size;
+ u8 ele_log_size = prq_param->rq_attr.ele_log_size;
+ int wq_sz;
+ int i, f;
+ int ret = 0;
+
+ prq_param->wq.db_numa_node = cpu_to_node(c->cpu);
+
+ ret = xsc_eth_wq_cyc_create(c->adapter->xdev, &prq_param->wq,
+ q_log_size, ele_log_size, &prq->wqe.wq,
+ &prq->wq_ctrl);
+ if (ret)
+ return ret;
+
+ wq_sz = xsc_wq_cyc_get_size(&prq->wqe.wq);
+
+ prq->wqe.info = prq_param->frags_info;
+ prq->wqe.frags = kvzalloc_node(array_size((wq_sz << prq->wqe.info.log_num_frags),
+ sizeof(*prq->wqe.frags)),
+ GFP_KERNEL,
+ cpu_to_node(c->cpu));
+ if (!prq->wqe.frags) {
+ ret = -ENOMEM;
+ goto err_alloc_frags;
+ }
+
+ ret = xsc_eth_init_di_list(prq, wq_sz, c->cpu);
+ if (ret)
+ goto err_init_di;
+
+ prq->buff.map_dir = DMA_FROM_DEVICE;
+
+ /* Create a page_pool and register it with rxq */
+ pool_size = wq_sz << prq->wqe.info.log_num_frags;
+ pagepool_params.order = XSC_RX_FRAG_SZ_ORDER;
+ pagepool_params.flags = 0; /* No-internal DMA mapping in page_pool */
+ pagepool_params.pool_size = pool_size;
+ pagepool_params.nid = cpu_to_node(c->cpu);
+ pagepool_params.dev = c->adapter->dev;
+ pagepool_params.dma_dir = prq->buff.map_dir;
+
+ prq->page_pool = page_pool_create(&pagepool_params);
+ if (IS_ERR(prq->page_pool)) {
+ ret = PTR_ERR(prq->page_pool);
+ prq->page_pool = NULL;
+ goto err_create_pool;
+ }
+
+ if (c->chl_idx == 0)
+ xsc_core_dbg(adapter->xdev,
+ "page pool: size=%d, cpu=%d, pool_numa=%d, mtu=%d, wqe_numa=%d\n",
+ pool_size, c->cpu, pagepool_params.nid,
+ adapter->nic_param.mtu,
+ prq_param->wq.buf_numa_node);
+
+ for (i = 0; i < wq_sz; i++) {
+ struct xsc_eth_rx_wqe_cyc *wqe =
+ xsc_wq_cyc_get_wqe(&prq->wqe.wq, i);
+
+ for (f = 0; f < prq->wqe.info.num_frags; f++) {
+ u32 frag_size = prq->wqe.info.arr[f].frag_size;
+
+ wqe->data[f].seg_len = cpu_to_le32(frag_size);
+ wqe->data[f].mkey = cpu_to_le32(XSC_INVALID_LKEY);
+ }
+
+ for (; f < prq->wqe.info.frags_max_num; f++) {
+ wqe->data[f].seg_len = 0;
+ wqe->data[f].mkey = cpu_to_le32(XSC_INVALID_LKEY);
+ wqe->data[f].va = 0;
+ }
+ }
+
+ prq->post_wqes = xsc_eth_post_rx_wqes;
+ prq->handle_rx_cqe = xsc_eth_handle_rx_cqe;
+ prq->dealloc_wqe = xsc_eth_dealloc_rx_wqe;
+ prq->wqe.skb_from_cqe = xsc_rx_is_linear_skb(adapter->nic_param.mtu) ?
+ xsc_skb_from_cqe_linear :
+ xsc_skb_from_cqe_nonlinear;
+ prq->ix = c->chl_idx;
+ prq->frags_sz = adapter->nic_param.rq_frags_size;
+
+ return 0;
+
+err_create_pool:
+ xsc_eth_free_di_list(prq);
+err_init_di:
+ kvfree(prq->wqe.frags);
+err_alloc_frags:
+ xsc_eth_wq_destroy(&prq->wq_ctrl);
+ return ret;
+}
+
+static void xsc_free_qp_rq(struct xsc_rq *rq)
+{
+ kvfree(rq->wqe.frags);
+ kvfree(rq->wqe.di);
+
+ if (rq->page_pool)
+ page_pool_destroy(rq->page_pool);
+
+ xsc_eth_wq_destroy(&rq->wq_ctrl);
+}
+
+static int xsc_eth_open_rss_qp_rqs(struct xsc_adapter *adapter,
+ struct xsc_rq_param *prq_param,
+ struct xsc_eth_channels *chls,
+ unsigned int num_chl)
+{
+ int ret = 0, err = 0;
+ struct xsc_create_multiqp_mbox_in *in;
+ struct xsc_create_qp_request *req;
+ u8 q_log_size = prq_param->rq_attr.q_log_size;
+ int paslen = 0;
+ struct xsc_rq *prq;
+ struct xsc_channel *c;
+ int rqn_base;
+ int inlen;
+ int entry_len;
+ int i, j, n;
+ int hw_npages;
+
+ for (i = 0; i < num_chl; i++) {
+ c = &chls->c[i];
+
+ for (j = 0; j < c->qp.rq_num; j++) {
+ prq = &c->qp.rq[j];
+ ret = xsc_eth_alloc_rq(c, prq, prq_param);
+ if (ret)
+ goto err_alloc_rqs;
+
+ hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size, PAGE_SIZE_4K);
+ /*support different npages number smoothly*/
+ entry_len = sizeof(struct xsc_create_qp_request) +
+ sizeof(__be64) * hw_npages;
+
+ paslen += entry_len;
+ }
+ }
+
+ inlen = sizeof(struct xsc_create_multiqp_mbox_in) + paslen;
+ in = kvzalloc(inlen, GFP_KERNEL);
+ if (!in) {
+ ret = -ENOMEM;
+ goto err_create_rss_rqs;
+ }
+
+ in->qp_num = cpu_to_be16(num_chl);
+ in->qp_type = XSC_QUEUE_TYPE_RAW;
+ in->req_len = cpu_to_be32(inlen);
+
+ req = (struct xsc_create_qp_request *)&in->data[0];
+ n = 0;
+ for (i = 0; i < num_chl; i++) {
+ c = &chls->c[i];
+ for (j = 0; j < c->qp.rq_num; j++) {
+ prq = &c->qp.rq[j];
+
+ hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size, PAGE_SIZE_4K);
+ /* no use for eth */
+ req->input_qpn = cpu_to_be16(0);
+ req->qp_type = XSC_QUEUE_TYPE_RAW;
+ req->log_rq_sz = ilog2(adapter->xdev->caps.recv_ds_num) +
+ q_log_size;
+ req->pa_num = cpu_to_be16(hw_npages);
+ req->cqn_recv = cpu_to_be16(prq->cq.xcq.cqn);
+ req->cqn_send = req->cqn_recv;
+ req->glb_funcid = cpu_to_be16(adapter->xdev->glb_func_id);
+
+ xsc_core_fill_page_frag_array(&prq->wq_ctrl.buf, &req->pas[0], hw_npages);
+ n++;
+ req = (struct xsc_create_qp_request *)(&in->data[0] + entry_len * n);
+ }
+ }
+
+ ret = xsc_core_eth_create_rss_qp_rqs(adapter->xdev, in, inlen, &rqn_base);
+ kvfree(in);
+ if (ret)
+ goto err_create_rss_rqs;
+
+ n = 0;
+ for (i = 0; i < num_chl; i++) {
+ c = &chls->c[i];
+ for (j = 0; j < c->qp.rq_num; j++) {
+ prq = &c->qp.rq[j];
+ prq->rqn = rqn_base + n;
+ prq->cqp.qpn = prq->rqn;
+ prq->cqp.event = xsc_eth_qp_event;
+ prq->cqp.eth_queue_type = XSC_RES_RQ;
+ ret = xsc_core_create_resource_common(adapter->xdev, &prq->cqp);
+ if (ret) {
+ err = ret;
+ xsc_core_err(adapter->xdev,
+ "create resource common error qp:%d errno:%d\n",
+ prq->rqn, ret);
+ continue;
+ }
+
+ n++;
+ }
+ }
+ if (err)
+ return err;
+
+ adapter->channels.rqn_base = rqn_base;
+ xsc_core_info(adapter->xdev, "rqn_base=%d, rq_num=%d, state=0x%lx\n",
+ rqn_base, num_chl, prq->state);
+ return 0;
+
+err_create_rss_rqs:
+ i = num_chl;
+err_alloc_rqs:
+ for (--i; i >= 0; i--) {
+ c = &chls->c[i];
+ for (j = 0; j < c->qp.rq_num; j++) {
+ prq = &c->qp.rq[j];
+ xsc_free_qp_rq(prq);
+ }
+ }
+ return ret;
+}
+
+static void xsc_eth_free_rx_wqe(struct xsc_rq *rq)
+{
+ u16 wqe_ix;
+ struct xsc_wq_cyc *wq = &rq->wqe.wq;
+
+ while (!xsc_wq_cyc_is_empty(wq)) {
+ wqe_ix = xsc_wq_cyc_get_tail(wq);
+ rq->dealloc_wqe(rq, wqe_ix);
+ xsc_wq_cyc_pop(wq);
+ }
+}
+
+static int xsc_eth_close_qp_rq(struct xsc_channel *c, struct xsc_rq *prq)
+{
+ int ret;
+ struct xsc_core_device *xdev = c->adapter->xdev;
+
+ xsc_core_destroy_resource_common(xdev, &prq->cqp);
+
+ ret = xsc_core_eth_destroy_qp(xdev, prq->cqp.qpn);
+ if (ret)
+ return ret;
+
+ xsc_eth_free_rx_wqe(prq);
+ xsc_free_qp_rq(prq);
+
+ return 0;
+}
+
+static void xsc_eth_close_channel(struct xsc_channel *c, bool free_rq)
+{
+ int i;
+
+ for (i = 0; i < c->qp.rq_num; i++) {
+ if (free_rq)
+ xsc_eth_close_qp_rq(c, &c->qp.rq[i]);
+ xsc_eth_close_cq(c, &c->qp.rq[i].cq);
+ memset(&c->qp.rq[i], 0, sizeof(struct xsc_rq));
+ }
+
+ for (i = 0; i < c->qp.sq_num; i++) {
+ xsc_eth_close_qp_sq(c, &c->qp.sq[i]);
+ xsc_eth_close_cq(c, &c->qp.sq[i].cq);
+ }
+
+ netif_napi_del(&c->napi);
+}
+
+static int xsc_eth_open_channels(struct xsc_adapter *adapter)
{
+ int ret = 0;
+ int i;
+ struct xsc_channel_param *chl_param;
+ struct xsc_eth_channels *chls = &adapter->channels;
+ struct xsc_core_device *xdev = adapter->xdev;
+ bool free_rq = false;
+
+ chls->num_chl = adapter->nic_param.num_channels;
+ chls->c = kcalloc_node(chls->num_chl, sizeof(struct xsc_channel),
+ GFP_KERNEL, xdev->priv.numa_node);
+ if (!chls->c) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ chl_param = kvzalloc(sizeof(*chl_param), GFP_KERNEL);
+ if (!chl_param) {
+ ret = -ENOMEM;
+ goto err_free_ch;
+ }
+
+ xsc_eth_build_channel_param(adapter, chl_param);
+
+ for (i = 0; i < chls->num_chl; i++) {
+ ret = xsc_eth_open_channel(adapter, i, &chls->c[i], chl_param);
+ if (ret)
+ goto err_open_channel;
+ }
+
+ ret = xsc_eth_open_rss_qp_rqs(adapter, &chl_param->rq_param, chls, chls->num_chl);
+ if (ret)
+ goto err_open_channel;
+ free_rq = true;
+
+ for (i = 0; i < chls->num_chl; i++)
+ napi_enable(&chls->c[i].napi);
+
+ /* flush cache to memory before interrupt and napi_poll running */
+ smp_wmb();
+
+ ret = xsc_eth_modify_qps(adapter, chls);
+ if (ret)
+ goto err_modify_qps;
+
+ kvfree(chl_param);
+ xsc_core_info(adapter->xdev, "open %d channels ok\n", chls->num_chl);
+ return 0;
+
+err_modify_qps:
+ i = chls->num_chl;
+err_open_channel:
+ for (--i; i >= 0; i--)
+ xsc_eth_close_channel(&chls->c[i], free_rq);
+
+ kvfree(chl_param);
+err_free_ch:
+ kfree(chls->c);
+err:
+ chls->num_chl = 0;
+ xsc_core_warn(adapter->xdev, "failed to open %d channels, err=%d\n",
+ chls->num_chl, ret);
+ return ret;
+}
+
+static void xsc_eth_close_channels(struct xsc_adapter *adapter)
+{
+ int i;
+ struct xsc_channel *c = NULL;
+
+ for (i = 0; i < adapter->channels.num_chl; i++) {
+ c = &adapter->channels.c[i];
+ xsc_core_dbg(adapter->xdev, "start to close channel%d\n", c->chl_idx);
+
+ xsc_eth_close_channel(c, true);
+ }
+
+ kfree(adapter->channels.c);
+ adapter->channels.num_chl = 0;
+}
+
+static void xsc_netdev_set_tcs(struct xsc_adapter *priv, u16 nch, u8 ntc)
+{
+ int tc;
+
+ netdev_reset_tc(priv->netdev);
+
+ if (ntc == 1)
+ return;
+
+ netdev_set_num_tc(priv->netdev, ntc);
+
+ /* Map netdev TCs to offset 0
+ * We have our own UP to TXQ mapping for QoS
+ */
+ for (tc = 0; tc < ntc; tc++)
+ netdev_set_tc_queue(priv->netdev, tc, nch, 0);
+}
+
+static void xsc_eth_build_tx2sq_maps(struct xsc_adapter *adapter)
+{
+ struct xsc_channel *c;
+ struct xsc_sq *psq;
+ int i, tc;
+
+ for (i = 0; i < adapter->channels.num_chl; i++) {
+ c = &adapter->channels.c[i];
+ for (tc = 0; tc < c->num_tc; tc++) {
+ psq = &c->qp.sq[tc];
+ adapter->txq2sq[psq->txq_ix] = psq;
+ }
+ }
+}
+
+static void xsc_eth_activate_txqsq(struct xsc_channel *c)
+{
+ int tc = c->num_tc;
+ struct xsc_sq *psq;
+
+ for (tc = 0; tc < c->num_tc; tc++) {
+ psq = &c->qp.sq[tc];
+ psq->txq = netdev_get_tx_queue(psq->channel->netdev, psq->txq_ix);
+ set_bit(XSC_ETH_SQ_STATE_ENABLED, &psq->state);
+ netdev_tx_reset_queue(psq->txq);
+ netif_tx_start_queue(psq->txq);
+ }
+}
+
+static void xsc_eth_deactivate_txqsq(struct xsc_channel *c)
+{
+ int tc = c->num_tc;
+ struct xsc_sq *psq;
+
+ for (tc = 0; tc < c->num_tc; tc++) {
+ psq = &c->qp.sq[tc];
+ clear_bit(XSC_ETH_SQ_STATE_ENABLED, &psq->state);
+ }
+}
+
+static void xsc_activate_rq(struct xsc_channel *c)
+{
+ int i;
+
+ for (i = 0; i < c->qp.rq_num; i++)
+ set_bit(XSC_ETH_RQ_STATE_ENABLED, &c->qp.rq[i].state);
+}
+
+static void xsc_deactivate_rq(struct xsc_channel *c)
+{
+ int i;
+
+ for (i = 0; i < c->qp.rq_num; i++)
+ clear_bit(XSC_ETH_RQ_STATE_ENABLED, &c->qp.rq[i].state);
+}
+
+static void xsc_eth_activate_channel(struct xsc_channel *c)
+{
+ xsc_eth_activate_txqsq(c);
+ xsc_activate_rq(c);
+}
+
+static void xsc_eth_deactivate_channel(struct xsc_channel *c)
+{
+ xsc_deactivate_rq(c);
+ xsc_eth_deactivate_txqsq(c);
+}
+
+static void xsc_eth_activate_channels(struct xsc_eth_channels *chs)
+{
+ int i;
+
+ for (i = 0; i < chs->num_chl; i++)
+ xsc_eth_activate_channel(&chs->c[i]);
+}
+
+static void xsc_eth_deactivate_channels(struct xsc_eth_channels *chs)
+{
+ int i;
+
+ for (i = 0; i < chs->num_chl; i++)
+ xsc_eth_deactivate_channel(&chs->c[i]);
+
+ /* Sync with all NAPIs to wait until they stop using queues. */
+ synchronize_net();
+
+ for (i = 0; i < chs->num_chl; i++)
+ /* last doorbell out */
+ napi_disable(&chs->c[i].napi);
+}
+
+static void xsc_eth_activate_priv_channels(struct xsc_adapter *adapter)
+{
+ int num_txqs;
+ struct net_device *netdev = adapter->netdev;
+
+ num_txqs = adapter->channels.num_chl * adapter->nic_param.num_tc;
+ xsc_netdev_set_tcs(adapter, adapter->channels.num_chl, adapter->nic_param.num_tc);
+ netif_set_real_num_tx_queues(netdev, num_txqs);
+ netif_set_real_num_rx_queues(netdev, adapter->channels.num_chl);
+
+ xsc_eth_build_tx2sq_maps(adapter);
+ xsc_eth_activate_channels(&adapter->channels);
+ netif_tx_start_all_queues(adapter->netdev);
+}
+
+static void xsc_eth_deactivate_priv_channels(struct xsc_adapter *adapter)
+{
+ netif_tx_disable(adapter->netdev);
+ xsc_eth_deactivate_channels(&adapter->channels);
+}
+
+static int xsc_eth_sw_init(struct xsc_adapter *adapter)
+{
+ int ret;
+
+ ret = xsc_eth_open_channels(adapter);
+ if (ret)
+ return ret;
+
+ xsc_eth_activate_priv_channels(adapter);
+
return 0;
}
+static void xsc_eth_sw_deinit(struct xsc_adapter *adapter)
+{
+ xsc_eth_deactivate_priv_channels(adapter);
+
+ return xsc_eth_close_channels(adapter);
+}
+
+static bool xsc_eth_get_link_status(struct xsc_adapter *adapter)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+ bool link_up;
+ u16 vport = 0;
+
+ link_up = xsc_core_query_vport_state(xdev, vport);
+
+ xsc_core_dbg(adapter->xdev, "link_status=%d\n", link_up);
+
+ return link_up ? true : false;
+}
+
+static int xsc_eth_change_link_status(struct xsc_adapter *adapter)
+{
+ bool link_up;
+
+ link_up = xsc_eth_get_link_status(adapter);
+
+ if (link_up && !netif_carrier_ok(adapter->netdev)) {
+ netdev_info(adapter->netdev, "Link up\n");
+ netif_carrier_on(adapter->netdev);
+ } else if (!link_up && netif_carrier_ok(adapter->netdev)) {
+ netdev_info(adapter->netdev, "Link down\n");
+ netif_carrier_off(adapter->netdev);
+ }
+
+ return 0;
+}
+
+static void xsc_eth_event_work(struct work_struct *work)
+{
+ int err;
+ struct xsc_event_query_type_mbox_in in;
+ struct xsc_event_query_type_mbox_out out;
+ struct xsc_adapter *adapter = container_of(work, struct xsc_adapter, event_work);
+
+ if (adapter->status != XSCALE_ETH_DRIVER_OK)
+ return;
+
+ /*query cmd_type cmd*/
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_EVENT_TYPE);
+
+ err = xsc_cmd_exec(adapter->xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status) {
+ xsc_core_err(adapter->xdev, "failed to query event type, err=%d, status=%d\n",
+ err, out.hdr.status);
+ goto failed;
+ }
+
+ switch (out.ctx.resp_cmd_type) {
+ case XSC_CMD_EVENT_RESP_CHANGE_LINK:
+ err = xsc_eth_change_link_status(adapter);
+ if (err) {
+ xsc_core_err(adapter->xdev, "failed to change linkstatus, err=%d\n", err);
+ goto failed;
+ }
+
+ xsc_core_dbg(adapter->xdev, "event cmdtype=%04x\n", out.ctx.resp_cmd_type);
+ break;
+ case XSC_CMD_EVENT_RESP_TEMP_WARN:
+ xsc_core_warn(adapter->xdev, "[Minor]nic chip temperature high warning\n");
+ break;
+ case XSC_CMD_EVENT_RESP_OVER_TEMP_PROTECTION:
+ xsc_core_warn(adapter->xdev, "[Critical]nic chip was over-temperature\n");
+ break;
+ default:
+ xsc_core_info(adapter->xdev, "unknown event cmdtype=%04x\n",
+ out.ctx.resp_cmd_type);
+ break;
+ }
+
+failed:
+ return;
+}
+
+static void xsc_eth_event_handler(void *arg)
+{
+ struct xsc_adapter *adapter = (struct xsc_adapter *)arg;
+
+ queue_work(adapter->workq, &adapter->event_work);
+}
+
+static inline bool xsc_get_pct_drop_config(struct xsc_core_device *xdev)
+{
+ return (xdev->pdev->device == XSC_MC_PF_DEV_ID) ||
+ (xdev->pdev->device == XSC_MF_SOC_PF_DEV_ID) ||
+ (xdev->pdev->device == XSC_MS_PF_DEV_ID) ||
+ (xdev->pdev->device == XSC_MV_SOC_PF_DEV_ID);
+}
+
+static int xsc_eth_enable_nic_hca(struct xsc_adapter *adapter)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+ struct net_device *netdev = adapter->netdev;
+ struct xsc_cmd_enable_nic_hca_mbox_in in = {};
+ struct xsc_cmd_enable_nic_hca_mbox_out out = {};
+ u16 caps = 0;
+ u16 caps_mask = 0;
+ int err;
+
+ if (xsc_get_user_mode(xdev))
+ return 0;
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ENABLE_NIC_HCA);
+
+ in.rss.rss_en = 1;
+ in.rss.rqn_base = cpu_to_be16(adapter->channels.rqn_base -
+ xdev->caps.raweth_rss_qp_id_base);
+ in.rss.rqn_num = cpu_to_be16(adapter->channels.num_chl);
+ in.rss.hash_tmpl = cpu_to_be32(adapter->rss_param.rss_hash_tmpl);
+ in.rss.hfunc = xsc_hash_func_type(adapter->rss_param.hfunc);
+ caps_mask |= BIT(XSC_TBM_CAP_RSS);
+
+ if (netdev->features & NETIF_F_RXCSUM)
+ caps |= BIT(XSC_TBM_CAP_HASH_PPH);
+ caps_mask |= BIT(XSC_TBM_CAP_HASH_PPH);
+
+ if (xsc_get_pct_drop_config(xdev) && !(netdev->flags & IFF_SLAVE))
+ caps |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG);
+ caps_mask |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG);
+
+ memcpy(in.nic.mac_addr, netdev->dev_addr, ETH_ALEN);
+
+ in.nic.caps = cpu_to_be16(caps);
+ in.nic.caps_mask = cpu_to_be16(caps_mask);
+
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status) {
+ xsc_core_err(xdev, "failed!! err=%d, status=%d\n", err, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ xsc_core_info(xdev, "caps=0x%x, caps_mask=0x%x\n", caps, caps_mask);
+
+ return 0;
+}
+
+static int xsc_eth_disable_nic_hca(struct xsc_adapter *adapter)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+ struct net_device *netdev = adapter->netdev;
+ struct xsc_cmd_disable_nic_hca_mbox_in in = {};
+ struct xsc_cmd_disable_nic_hca_mbox_out out = {};
+ int err;
+ u16 caps = 0;
+
+ if (xsc_get_user_mode(xdev))
+ return 0;
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DISABLE_NIC_HCA);
+
+ if (xsc_get_pct_drop_config(xdev) && !(netdev->priv_flags & IFF_BONDING))
+ caps |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG);
+
+ in.nic.caps = cpu_to_be16(caps);
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status) {
+ xsc_core_err(xdev, "failed!! err=%d, status=%d\n", err, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ return 0;
+}
+
+static void xsc_set_default_xps_cpumasks(struct xsc_adapter *priv,
+ struct xsc_eth_params *params)
+{
+ struct xsc_core_device *xdev = priv->xdev;
+ int num_comp_vectors, irq;
+
+ num_comp_vectors = priv->nic_param.comp_vectors;
+ cpumask_clear(xdev->xps_cpumask);
+
+ for (irq = 0; irq < num_comp_vectors; irq++) {
+ cpumask_set_cpu(cpumask_local_spread(irq, xdev->priv.numa_node),
+ xdev->xps_cpumask);
+ netif_set_xps_queue(priv->netdev, xdev->xps_cpumask, irq);
+ }
+}
+
+static int xsc_set_port_admin_status(struct xsc_adapter *adapter,
+ enum xsc_port_status status)
+{
+ struct xsc_event_set_port_admin_status_mbox_in in;
+ struct xsc_event_set_port_admin_status_mbox_out out;
+ int ret = 0;
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_SET_PORT_ADMIN_STATUS);
+ in.admin_status = cpu_to_be16(status);
+
+ ret = xsc_cmd_exec(adapter->xdev, &in, sizeof(in), &out, sizeof(out));
+ if (ret || out.hdr.status) {
+ xsc_core_err(adapter->xdev, "failed to set port admin status, err=%d, status=%d\n",
+ ret, out.hdr.status);
+ return -ENOEXEC;
+ }
+
+ return ret;
+}
+
+static int xsc_eth_open(struct net_device *netdev)
+{
+ struct xsc_adapter *adapter = netdev_priv(netdev);
+ struct xsc_core_device *xdev = adapter->xdev;
+ int ret = XSCALE_RET_SUCCESS;
+
+ mutex_lock(&adapter->status_lock);
+ if (adapter->status == XSCALE_ETH_DRIVER_OK) {
+ xsc_core_warn(adapter->xdev, "unnormal ndo_open when status=%d\n",
+ adapter->status);
+ goto ret;
+ }
+
+ ret = xsc_eth_sw_init(adapter);
+ if (ret)
+ goto ret;
+
+ ret = xsc_eth_enable_nic_hca(adapter);
+ if (ret)
+ goto sw_deinit;
+
+ /*INIT_WORK*/
+ INIT_WORK(&adapter->event_work, xsc_eth_event_work);
+ xdev->event_handler = xsc_eth_event_handler;
+
+ if (xsc_eth_get_link_status(adapter)) {
+ netdev_info(netdev, "Link up\n");
+ netif_carrier_on(adapter->netdev);
+ } else {
+ netdev_info(netdev, "Link down\n");
+ }
+
+ adapter->status = XSCALE_ETH_DRIVER_OK;
+
+ xsc_set_default_xps_cpumasks(adapter, &adapter->nic_param);
+
+ xsc_set_port_admin_status(adapter, XSC_PORT_UP);
+
+ goto ret;
+
+sw_deinit:
+ xsc_eth_sw_deinit(adapter);
+
+ret:
+ mutex_unlock(&adapter->status_lock);
+ xsc_core_info(xdev, "open %s %s, ret=%d\n",
+ netdev->name, ret ? "failed" : "ok", ret);
+ if (ret)
+ return XSCALE_RET_ERROR;
+ else
+ return XSCALE_RET_SUCCESS;
+}
+
+static int xsc_eth_close(struct net_device *netdev)
+{
+ struct xsc_adapter *adapter = netdev_priv(netdev);
+ int ret = 0;
+
+ mutex_lock(&adapter->status_lock);
+
+ if (!netif_device_present(netdev)) {
+ ret = -ENODEV;
+ goto ret;
+ }
+
+ if (adapter->status != XSCALE_ETH_DRIVER_OK)
+ goto ret;
+
+ adapter->status = XSCALE_ETH_DRIVER_CLOSE;
+
+ netif_carrier_off(adapter->netdev);
+
+ xsc_eth_sw_deinit(adapter);
+
+ ret = xsc_eth_disable_nic_hca(adapter);
+ if (ret)
+ xsc_core_warn(adapter->xdev, "failed to disable nic hca, err=%d\n", ret);
+
+ xsc_set_port_admin_status(adapter, XSC_PORT_DOWN);
+
+ret:
+ mutex_unlock(&adapter->status_lock);
+ xsc_core_info(adapter->xdev, "close device %s %s, ret=%d\n",
+ adapter->netdev->name, ret ? "failed" : "ok", ret);
+
+ return ret;
+}
+
static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz)
{
struct xsc_set_mtu_mbox_in in;
@@ -159,7 +1660,8 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_
}
static const struct net_device_ops xsc_netdev_ops = {
- // TBD
+ .ndo_open = xsc_eth_open,
+ .ndo_stop = xsc_eth_close,
};
static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index cf16d98cb..326f99520 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -9,6 +9,12 @@
#include "common/xsc_device.h"
#include "xsc_eth_common.h"
+#define XSC_INVALID_LKEY 0x100
+
+#define XSCALE_DRIVER_NAME "xsc_eth"
+#define XSCALE_RET_SUCCESS 0
+#define XSCALE_RET_ERROR 1
+
enum {
XSCALE_ETH_DRIVER_INIT,
XSCALE_ETH_DRIVER_OK,
@@ -34,7 +40,9 @@ struct xsc_adapter {
struct xsc_rss_params rss_param;
struct workqueue_struct *workq;
+ struct work_struct event_work;
+ struct xsc_eth_channels channels;
struct xsc_sq **txq2sq;
u32 status;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index 9a878cfb7..d35791a31 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -6,6 +6,7 @@
#ifndef XSC_ETH_COMMON_H
#define XSC_ETH_COMMON_H
+#include "xsc_queue.h"
#include "xsc_pph.h"
#define SW_MIN_MTU ETH_MIN_MTU
@@ -20,12 +21,130 @@
#define XSC_ETH_RX_MAX_HEAD_ROOM 256
#define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM)
+#define XSC_QPN_SQN_STUB 1025
+#define XSC_QPN_RQN_STUB 1024
+
#define XSC_LOG_INDIR_RQT_SIZE 0x8
#define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE)
#define XSC_ETH_MIN_NUM_CHANNELS 2
#define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE
+#define XSC_TX_NUM_TC 1
+#define XSC_MAX_NUM_TC 8
+#define XSC_ETH_MAX_TC_TOTAL (XSC_ETH_MAX_NUM_CHANNELS * XSC_MAX_NUM_TC)
+#define XSC_ETH_MAX_QP_NUM_PER_CH (XSC_MAX_NUM_TC + 1)
+
+#define XSC_SKB_FRAG_SZ(len) (SKB_DATA_ALIGN(len) + \
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+
+#define XSC_RQCQ_ELE_SZ 32 //size of a rqcq entry
+#define XSC_SQCQ_ELE_SZ 32 //size of a sqcq entry
+#define XSC_RQ_ELE_SZ XSC_RECV_WQE_BB
+#define XSC_SQ_ELE_SZ XSC_SEND_WQE_BB
+#define XSC_EQ_ELE_SZ 8 //size of a eq entry
+
+#define XSC_SKB_FRAG_SZ(len) (SKB_DATA_ALIGN(len) + \
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+#define XSC_MIN_SKB_FRAG_SZ (XSC_SKB_FRAG_SZ(XSC_RX_HEADROOM))
+#define XSC_LOG_MAX_RX_WQE_BULK \
+ (ilog2(PAGE_SIZE / roundup_pow_of_two(XSC_MIN_SKB_FRAG_SZ)))
+
+#define XSC_MIN_LOG_RQ_SZ (1 + XSC_LOG_MAX_RX_WQE_BULK)
+#define XSC_DEF_LOG_RQ_SZ 0xa
+#define XSC_MAX_LOG_RQ_SZ 0xd
+
+#define XSC_MIN_LOG_SQ_SZ 0x6
+#define XSC_DEF_LOG_SQ_SZ 0xa
+#define XSC_MAX_LOG_SQ_SZ 0xd
+
+#define XSC_SQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_SQ_SZ)
+#define XSC_RQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_RQ_SZ)
+
+#define XSC_SQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_SQ_SZ)
+#define XSC_RQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_RQ_SZ)
+
+#define XSC_LOG_RQCQ_SZ 0xb
+#define XSC_LOG_SQCQ_SZ 0xa
+
+#define XSC_RQCQ_ELE_NUM BIT(XSC_LOG_RQCQ_SZ)
+#define XSC_SQCQ_ELE_NUM BIT(XSC_LOG_SQCQ_SZ)
+#define XSC_RQ_ELE_NUM XSC_RQ_ELE_NUM_DEF //ds number of a wqebb
+#define XSC_SQ_ELE_NUM XSC_SQ_ELE_NUM_DEF //DS number
+#define XSC_EQ_ELE_NUM XSC_SQ_ELE_NUM_DEF //number of eq entry???
+
+enum xsc_port_status {
+ XSC_PORT_DOWN = 0,
+ XSC_PORT_UP = 1,
+};
+
+enum xsc_queue_type {
+ XSC_QUEUE_TYPE_EQ = 0,
+ XSC_QUEUE_TYPE_RQCQ,
+ XSC_QUEUE_TYPE_SQCQ,
+ XSC_QUEUE_TYPE_RQ,
+ XSC_QUEUE_TYPE_SQ,
+ XSC_QUEUE_TYPE_MAX,
+};
+
+struct xsc_queue_attr {
+ u8 q_type;
+ u32 ele_num;
+ u32 ele_size;
+ u8 ele_log_size;
+ u8 q_log_size;
+};
+
+struct xsc_eth_rx_wqe_cyc {
+ DECLARE_FLEX_ARRAY(struct xsc_wqe_data_seg, data);
+};
+
+struct xsc_eq_param {
+ struct xsc_queue_attr eq_attr;
+};
+
+struct xsc_cq_param {
+ struct xsc_wq_param wq;
+ struct cq_cmd {
+ u8 abc[16];
+ } cqc;
+ struct xsc_queue_attr cq_attr;
+};
+
+struct xsc_rq_param {
+ struct xsc_wq_param wq;
+ struct xsc_queue_attr rq_attr;
+ struct xsc_rq_frags_info frags_info;
+};
+
+struct xsc_sq_param {
+ struct xsc_wq_param wq;
+ struct xsc_queue_attr sq_attr;
+};
+
+struct xsc_qp_param {
+ struct xsc_queue_attr qp_attr;
+};
+
+struct xsc_channel_param {
+ struct xsc_cq_param rqcq_param;
+ struct xsc_cq_param sqcq_param;
+ struct xsc_rq_param rq_param;
+ struct xsc_sq_param sq_param;
+ struct xsc_qp_param qp_param;
+};
+
+struct xsc_eth_qp {
+ u16 rq_num;
+ u16 sq_num;
+ struct xsc_rq rq[XSC_MAX_NUM_TC]; /*may be use one only*/
+ struct xsc_sq sq[XSC_MAX_NUM_TC]; /*reserved to tc*/
+};
+
+enum channel_flags {
+ XSC_CHANNEL_NAPI_SCHED = 1,
+};
+
struct xsc_eth_params {
u16 num_channels;
u16 max_num_ch;
@@ -57,4 +176,28 @@ struct xsc_eth_params {
u32 pflags;
};
+struct xsc_channel {
+ /* data path */
+ struct xsc_eth_qp qp;
+ struct napi_struct napi;
+ u8 num_tc;
+ int chl_idx;
+
+ /*relationship*/
+ struct xsc_adapter *adapter;
+ struct net_device *netdev;
+ int cpu;
+ unsigned long flags;
+
+ /* data path - accessed per napi poll */
+ const struct cpumask *aff_mask;
+ struct irq_desc *irq_desc;
+} ____cacheline_aligned_in_smp;
+
+struct xsc_eth_channels {
+ struct xsc_channel *c;
+ unsigned int num_chl;
+ u32 rqn_base;
+};
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
new file mode 100644
index 000000000..73af472cf
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All
+ * rights reserved.
+ * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "xsc_eth_txrx.h"
+
+int xsc_poll_rx_cq(struct xsc_cq *cq, int budget)
+{
+ // TBD
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
new file mode 100644
index 000000000..2dd4aa3cb
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "xsc_eth_common.h"
+#include "xsc_eth_txrx.h"
+
+void xsc_cq_notify_hw_rearm(struct xsc_cq *cq)
+{
+ union xsc_cq_doorbell db;
+
+ db.val = 0;
+ db.cq_next_cid = cpu_to_le32(cq->wq.cc);
+ db.cq_id = cpu_to_le32(cq->xcq.cqn);
+ db.arm = 0;
+
+ /* ensure doorbell record is visible to device before ringing the doorbell */
+ wmb();
+ writel(db.val, REG_ADDR(cq->xdev, cq->xdev->regs.complete_db));
+}
+
+void xsc_cq_notify_hw(struct xsc_cq *cq)
+{
+ struct xsc_core_device *xdev = cq->xdev;
+ union xsc_cq_doorbell db;
+
+ dma_wmb();
+
+ db.val = 0;
+ db.cq_next_cid = cpu_to_le32(cq->wq.cc);
+ db.cq_id = cpu_to_le32(cq->xcq.cqn);
+
+ writel(db.val, REG_ADDR(xdev, xdev->regs.complete_reg));
+}
+
+static inline bool xsc_channel_no_affinity_change(struct xsc_channel *c)
+{
+ int current_cpu = smp_processor_id();
+
+ return cpumask_test_cpu(current_cpu, c->aff_mask);
+}
+
+static bool xsc_poll_tx_cq(struct xsc_cq *cq, int napi_budget)
+{
+ // TBD
+ return true;
+}
+
+int xsc_eth_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct xsc_channel *c = container_of(napi, struct xsc_channel, napi);
+ struct xsc_eth_params *params = &c->adapter->nic_param;
+ struct xsc_rq *rq = &c->qp.rq[0];
+ struct xsc_sq *sq = NULL;
+ bool busy = false;
+ int work_done = 0;
+ int tx_budget = 0;
+ int i;
+
+ rcu_read_lock();
+
+ clear_bit(XSC_CHANNEL_NAPI_SCHED, &c->flags);
+
+ tx_budget = params->sq_size >> 2;
+ for (i = 0; i < c->num_tc; i++)
+ busy |= xsc_poll_tx_cq(&c->qp.sq[i].cq, tx_budget);
+
+ /* budget=0 means: don't poll rx rings */
+ if (likely(budget)) {
+ work_done = xsc_poll_rx_cq(&rq->cq, budget);
+ busy |= work_done == budget;
+ }
+
+ busy |= rq->post_wqes(rq);
+
+ if (busy) {
+ if (likely(xsc_channel_no_affinity_change(c))) {
+ rcu_read_unlock();
+ return budget;
+ }
+ if (budget && work_done == budget)
+ work_done--;
+ }
+
+ if (unlikely(!napi_complete_done(napi, work_done)))
+ goto out;
+
+ for (i = 0; i < c->num_tc; i++) {
+ sq = &c->qp.sq[i];
+ xsc_cq_notify_hw_rearm(&sq->cq);
+ }
+
+ xsc_cq_notify_hw_rearm(&rq->cq);
+out:
+ rcu_read_unlock();
+ return work_done;
+}
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
new file mode 100644
index 000000000..18cafd410
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_RXTX_H
+#define XSC_RXTX_H
+
+#include "xsc_eth.h"
+
+void xsc_cq_notify_hw_rearm(struct xsc_cq *cq);
+void xsc_cq_notify_hw(struct xsc_cq *cq);
+int xsc_eth_napi_poll(struct napi_struct *napi, int budget);
+bool xsc_eth_post_rx_wqes(struct xsc_rq *rq);
+void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq,
+ struct xsc_rq *rq, struct xsc_cqe *cqe);
+void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix);
+struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *wi,
+ u32 cqe_bcnt, u8 has_pph);
+struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *wi,
+ u32 cqe_bcnt, u8 has_pph);
+int xsc_poll_rx_cq(struct xsc_cq *cq, int budget);
+
+#endif /* XSC_RXTX_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
index ba2601361..89f710a52 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -35,7 +35,152 @@
#ifndef XSC_QUEUE_H
#define XSC_QUEUE_H
+#include <net/page_pool/types.h>
+#include <net/page_pool/helpers.h>
#include "common/xsc_core.h"
+#include "xsc_eth_wq.h"
+
+enum {
+ XSC_SEND_WQE_DS = 16,
+ XSC_SEND_WQE_BB = 64,
+};
+
+enum {
+ XSC_RECV_WQE_DS = 16,
+ XSC_RECV_WQE_BB = 16,
+};
+
+enum {
+ XSC_ETH_RQ_STATE_ENABLED,
+ XSC_ETH_RQ_STATE_AM,
+ XSC_ETH_RQ_STATE_CACHE_REDUCE_PENDING,
+};
+
+#define XSC_SEND_WQEBB_NUM_DS (XSC_SEND_WQE_BB / XSC_SEND_WQE_DS)
+#define XSC_LOG_SEND_WQEBB_NUM_DS ilog2(XSC_SEND_WQEBB_NUM_DS)
+
+#define XSC_RECV_WQEBB_NUM_DS (XSC_RECV_WQE_BB / XSC_RECV_WQE_DS)
+#define XSC_LOG_RECV_WQEBB_NUM_DS ilog2(XSC_RECV_WQEBB_NUM_DS)
+
+/* each ds holds one fragment in skb */
+#define XSC_MAX_RX_FRAGS 4
+#define XSC_RX_FRAG_SZ_ORDER 0
+#define XSC_RX_FRAG_SZ (PAGE_SIZE << XSC_RX_FRAG_SZ_ORDER)
+#define DEFAULT_FRAG_SIZE (2048)
+
+enum {
+ XSC_ETH_SQ_STATE_ENABLED,
+ XSC_ETH_SQ_STATE_AM,
+};
+
+struct xsc_dma_info {
+ struct page *page;
+ dma_addr_t addr;
+};
+
+struct xsc_page_cache {
+ struct xsc_dma_info *page_cache;
+ u32 head;
+ u32 tail;
+ u32 sz;
+ u32 resv;
+};
+
+struct xsc_cq {
+ /* data path - accessed per cqe */
+ struct xsc_cqwq wq;
+
+ /* data path - accessed per napi poll */
+ u16 event_ctr;
+ struct napi_struct *napi;
+ struct xsc_core_cq xcq;
+ struct xsc_channel *channel;
+
+ /* control */
+ struct xsc_core_device *xdev;
+ struct xsc_wq_ctrl wq_ctrl;
+ u8 rx;
+} ____cacheline_aligned_in_smp;
+
+struct xsc_wqe_frag_info {
+ struct xsc_dma_info *di;
+ u32 offset;
+ u8 last_in_page;
+ u8 is_available;
+};
+
+struct xsc_rq_frag_info {
+ int frag_size;
+ int frag_stride;
+};
+
+struct xsc_rq_frags_info {
+ struct xsc_rq_frag_info arr[XSC_MAX_RX_FRAGS];
+ u8 num_frags;
+ u8 log_num_frags;
+ u8 wqe_bulk;
+ u8 wqe_bulk_min;
+ u8 frags_max_num;
+};
+
+struct xsc_rq;
+typedef void (*xsc_fp_handle_rx_cqe)(struct xsc_cqwq *cqwq, struct xsc_rq *rq,
+ struct xsc_cqe *cqe);
+typedef bool (*xsc_fp_post_rx_wqes)(struct xsc_rq *rq);
+typedef void (*xsc_fp_dealloc_wqe)(struct xsc_rq *rq, u16 ix);
+typedef struct sk_buff * (*xsc_fp_skb_from_cqe)(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *wi, u32 cqe_bcnt, u8 has_pph);
+
+struct xsc_rq {
+ struct xsc_core_qp cqp;
+ struct {
+ struct xsc_wq_cyc wq;
+ struct xsc_wqe_frag_info *frags;
+ struct xsc_dma_info *di;
+ struct xsc_rq_frags_info info;
+ xsc_fp_skb_from_cqe skb_from_cqe;
+ } wqe;
+
+ struct {
+ u16 headroom;
+ u8 map_dir; /* dma map direction */
+ } buff;
+
+ struct page_pool *page_pool;
+ struct xsc_wq_ctrl wq_ctrl;
+ struct xsc_cq cq;
+ u32 rqn;
+ int ix;
+
+ unsigned long state;
+ struct work_struct recover_work;
+
+ u32 hw_mtu;
+ u32 frags_sz;
+
+ xsc_fp_handle_rx_cqe handle_rx_cqe;
+ xsc_fp_post_rx_wqes post_wqes;
+ xsc_fp_dealloc_wqe dealloc_wqe;
+ struct xsc_page_cache page_cache;
+} ____cacheline_aligned_in_smp;
+
+enum xsc_dma_map_type {
+ XSC_DMA_MAP_SINGLE,
+ XSC_DMA_MAP_PAGE
+};
+
+struct xsc_sq_dma {
+ dma_addr_t addr;
+ u32 size;
+ enum xsc_dma_map_type type;
+};
+
+struct xsc_tx_wqe_info {
+ struct sk_buff *skb;
+ u32 num_bytes;
+ u8 num_wqebbs;
+ u8 num_dma;
+};
struct xsc_sq {
struct xsc_core_qp cqp;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 0f4b17dfa..01f5e911f 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,5 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
-xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o intf.o
+xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o intf.o vport.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
new file mode 100644
index 000000000..8200f6c91
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021 - 2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "common/xsc_core.h"
+#include "common/xsc_driver.h"
+
+u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport)
+{
+ struct xsc_query_vport_state_in in;
+ struct xsc_query_vport_state_out out;
+ int err;
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_VPORT_STATE);
+ in.vport_number = cpu_to_be16(vport);
+ if (vport)
+ in.other_vport = 1;
+
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status)
+ xsc_core_err(xdev, "failed to query vport state, err=%d, status=%d\n",
+ err, out.hdr.status);
+
+ return out.state;
+}
+EXPORT_SYMBOL(xsc_core_query_vport_state);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 12/16] net-next/yunsilicon: Add ndo_start_xmit
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (10 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 11/16] net-next/yunsilicon: ndo_open and ndo_stop Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 13/16] net-next/yunsilicon: Add eth rx Xin Tian
` (4 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add ndo_start_xmit
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +-
.../net/ethernet/yunsilicon/xsc/net/main.c | 1 +
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 1 +
.../yunsilicon/xsc/net/xsc_eth_common.h | 8 +
.../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 1 -
.../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 305 ++++++++++++++++++
.../yunsilicon/xsc/net/xsc_eth_txrx.h | 36 +++
.../ethernet/yunsilicon/xsc/net/xsc_queue.h | 7 +
8 files changed, 359 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
index 104ef5330..7cfc2aaa2 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
-xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_rx.o
+xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index dd2f99537..338e308a6 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -1662,6 +1662,7 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_
static const struct net_device_ops xsc_netdev_ops = {
.ndo_open = xsc_eth_open,
.ndo_stop = xsc_eth_close,
+ .ndo_start_xmit = xsc_eth_xmit_start,
};
static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 326f99520..3e5614eb1 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -6,6 +6,7 @@
#ifndef XSC_ETH_H
#define XSC_ETH_H
+#include <linux/udp.h>
#include "common/xsc_device.h"
#include "xsc_eth_common.h"
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index d35791a31..8b8398cfb 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -200,4 +200,12 @@ struct xsc_eth_channels {
u32 rqn_base;
};
+union xsc_send_doorbell {
+ struct{
+ s32 next_pid : 16;
+ u32 qp_num : 15;
+ };
+ u32 send_data;
+};
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index 73af472cf..286e5f9eb 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -40,4 +40,3 @@ int xsc_poll_rx_cq(struct xsc_cq *cq, int budget)
// TBD
return 0;
}
-
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
new file mode 100644
index 000000000..67e68485e
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
@@ -0,0 +1,305 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include <linux/tcp.h>
+#include "xsc_eth.h"
+#include "xsc_eth_txrx.h"
+
+#define XSC_OPCODE_RAW 7
+
+static inline void xsc_dma_push(struct xsc_sq *sq, dma_addr_t addr, u32 size,
+ enum xsc_dma_map_type map_type)
+{
+ struct xsc_sq_dma *dma = xsc_dma_get(sq, sq->dma_fifo_pc++);
+
+ dma->addr = addr;
+ dma->size = size;
+ dma->type = map_type;
+}
+
+static void xsc_dma_unmap_wqe_err(struct xsc_sq *sq, u8 num_dma)
+{
+ struct xsc_adapter *adapter = sq->channel->adapter;
+ struct device *dev = adapter->dev;
+
+ int i;
+
+ for (i = 0; i < num_dma; i++) {
+ struct xsc_sq_dma *last_pushed_dma = xsc_dma_get(sq, --sq->dma_fifo_pc);
+
+ xsc_tx_dma_unmap(dev, last_pushed_dma);
+ }
+}
+
+static inline void *xsc_sq_fetch_wqe(struct xsc_sq *sq, size_t size, u16 *pi)
+{
+ struct xsc_wq_cyc *wq = &sq->wq;
+ void *wqe;
+
+ /*caution, sp->pc is default to be zero*/
+ *pi = xsc_wq_cyc_ctr2ix(wq, sq->pc);
+ wqe = xsc_wq_cyc_get_wqe(wq, *pi);
+ memset(wqe, 0, size);
+
+ return wqe;
+}
+
+static u16 xsc_tx_get_gso_ihs(struct xsc_sq *sq, struct sk_buff *skb)
+{
+ u16 ihs;
+
+ if (skb->encapsulation) {
+ ihs = skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+ } else {
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+ ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
+ else
+ ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ }
+
+ return ihs;
+}
+
+static void xsc_txwqe_build_cseg_csum(struct xsc_sq *sq,
+ struct sk_buff *skb,
+ struct xsc_send_wqe_ctrl_seg *cseg)
+{
+ if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ if (skb->encapsulation)
+ cseg->csum_en = XSC_ETH_WQE_INNER_AND_OUTER_CSUM;
+ else
+ cseg->csum_en = XSC_ETH_WQE_OUTER_CSUM;
+ } else {
+ cseg->csum_en = XSC_ETH_WQE_NONE_CSUM;
+ }
+}
+
+static void xsc_txwqe_build_csegs(struct xsc_sq *sq, struct sk_buff *skb,
+ u16 mss, u16 ihs, u16 headlen,
+ u8 opcode, u16 ds_cnt, u32 num_bytes,
+ struct xsc_send_wqe_ctrl_seg *cseg)
+{
+ struct xsc_core_device *xdev = sq->cq.xdev;
+ int send_wqe_ds_num_log = ilog2(xdev->caps.send_ds_num);
+
+ xsc_txwqe_build_cseg_csum(sq, skb, cseg);
+
+ if (mss != 0) {
+ cseg->has_pph = 0;
+ cseg->so_type = 1;
+ cseg->so_hdr_len = ihs;
+ cseg->so_data_size = cpu_to_le16(mss);
+ }
+
+ cseg->msg_opcode = opcode;
+ cseg->wqe_id = cpu_to_le16(sq->pc << send_wqe_ds_num_log);
+ cseg->ds_data_num = ds_cnt - XSC_SEND_WQEBB_CTRL_NUM_DS;
+ cseg->msg_len = cpu_to_le32(num_bytes);
+
+ cseg->ce = 1;
+}
+
+static int xsc_txwqe_build_dsegs(struct xsc_sq *sq, struct sk_buff *skb,
+ u16 ihs, u16 headlen,
+ struct xsc_wqe_data_seg *dseg)
+{
+ dma_addr_t dma_addr = 0;
+ u8 num_dma = 0;
+ int i;
+ struct xsc_adapter *adapter = sq->channel->adapter;
+ struct device *dev = adapter->dev;
+
+ if (headlen) {
+ dma_addr = dma_map_single(dev, skb->data, headlen, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, dma_addr)))
+ goto dma_unmap_wqe_err;
+
+ dseg->va = cpu_to_le64(dma_addr);
+ dseg->mkey = cpu_to_le32(sq->mkey_be);
+ dseg->seg_len = cpu_to_le32(headlen);
+
+ xsc_dma_push(sq, dma_addr, headlen, XSC_DMA_MAP_SINGLE);
+ num_dma++;
+ dseg++;
+ }
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ int fsz = skb_frag_size(frag);
+
+ dma_addr = skb_frag_dma_map(dev, frag, 0, fsz, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, dma_addr)))
+ goto dma_unmap_wqe_err;
+
+ dseg->va = cpu_to_le64(dma_addr);
+ dseg->mkey = cpu_to_le32(sq->mkey_be);
+ dseg->seg_len = cpu_to_le32(fsz);
+
+ xsc_dma_push(sq, dma_addr, fsz, XSC_DMA_MAP_PAGE);
+ num_dma++;
+ dseg++;
+ }
+
+ return num_dma;
+
+dma_unmap_wqe_err:
+ xsc_dma_unmap_wqe_err(sq, num_dma);
+ return -ENOMEM;
+}
+
+static inline void xsc_sq_notify_hw(struct xsc_wq_cyc *wq, u16 pc,
+ struct xsc_sq *sq)
+{
+ struct xsc_adapter *adapter = sq->channel->adapter;
+ struct xsc_core_device *xdev = adapter->xdev;
+ union xsc_send_doorbell doorbell_value;
+ int send_ds_num_log = ilog2(xdev->caps.send_ds_num);
+
+ /*reverse wqe index to ds index*/
+ doorbell_value.next_pid = pc << send_ds_num_log;
+ doorbell_value.qp_num = sq->sqn;
+
+ /* Make sure that descriptors are written before
+ * updating doorbell record and ringing the doorbell
+ */
+ wmb();
+ writel(doorbell_value.send_data, REG_ADDR(xdev, xdev->regs.tx_db));
+}
+
+static void xsc_txwqe_complete(struct xsc_sq *sq, struct sk_buff *skb,
+ u8 opcode, u16 ds_cnt, u8 num_wqebbs, u32 num_bytes, u8 num_dma,
+ struct xsc_tx_wqe_info *wi)
+{
+ struct xsc_wq_cyc *wq = &sq->wq;
+
+ wi->num_bytes = num_bytes;
+ wi->num_dma = num_dma;
+ wi->num_wqebbs = num_wqebbs;
+ wi->skb = skb;
+
+ netdev_tx_sent_queue(sq->txq, num_bytes);
+
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+
+ sq->pc += wi->num_wqebbs;
+
+ if (unlikely(!xsc_wqc_has_room_for(wq, sq->cc, sq->pc, sq->stop_room)))
+ netif_tx_stop_queue(sq->txq);
+
+ if (!netdev_xmit_more() || netif_xmit_stopped(sq->txq))
+ xsc_sq_notify_hw(wq, sq->pc, sq);
+}
+
+static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb,
+ struct xsc_sq *sq,
+ struct xsc_tx_wqe *wqe,
+ u16 pi)
+{
+ struct xsc_send_wqe_ctrl_seg *cseg;
+ struct xsc_wqe_data_seg *dseg;
+ struct xsc_tx_wqe_info *wi;
+ struct xsc_core_device *xdev = sq->cq.xdev;
+ u16 ds_cnt;
+ u16 mss, ihs, headlen;
+ u8 opcode;
+ u32 num_bytes, num_dma;
+ u8 num_wqebbs;
+
+retry_send:
+ /* Calc ihs and ds cnt, no writes to wqe yet */
+ /*ctrl-ds, it would be reduce in ds_data_num*/
+ ds_cnt = XSC_SEND_WQEBB_CTRL_NUM_DS;
+
+ /*in andes inline is bonding with gso*/
+ if (skb_is_gso(skb)) {
+ opcode = XSC_OPCODE_RAW;
+ mss = skb_shinfo(skb)->gso_size;
+ ihs = xsc_tx_get_gso_ihs(sq, skb);
+ num_bytes = skb->len;
+ } else {
+ opcode = XSC_OPCODE_RAW;
+ mss = 0;
+ ihs = 0;
+ num_bytes = skb->len;
+ }
+
+ /*linear data in skb*/
+ headlen = skb->len - skb->data_len;
+ ds_cnt += !!headlen;
+ ds_cnt += skb_shinfo(skb)->nr_frags;
+
+ /* Check packet size. */
+ if (unlikely(mss == 0 && num_bytes > sq->hw_mtu))
+ goto err_drop;
+
+ num_wqebbs = DIV_ROUND_UP(ds_cnt, xdev->caps.send_ds_num);
+ /*if ds_cnt exceed one wqe, drop it*/
+ if (num_wqebbs != 1) {
+ if (skb_linearize(skb))
+ goto err_drop;
+ goto retry_send;
+ }
+
+ /* fill wqe */
+ wi = (struct xsc_tx_wqe_info *)&sq->db.wqe_info[pi];
+ cseg = &wqe->ctrl;
+ dseg = &wqe->data[0];
+
+ if (unlikely(num_bytes == 0))
+ goto err_drop;
+
+ xsc_txwqe_build_csegs(sq, skb, mss, ihs, headlen,
+ opcode, ds_cnt, num_bytes, cseg);
+
+ /*inline header is also use dma to transport*/
+ num_dma = xsc_txwqe_build_dsegs(sq, skb, ihs, headlen, dseg);
+ if (unlikely(num_dma < 0))
+ goto err_drop;
+
+ xsc_txwqe_complete(sq, skb, opcode, ds_cnt, num_wqebbs, num_bytes,
+ num_dma, wi);
+
+ return NETDEV_TX_OK;
+
+err_drop:
+ dev_kfree_skb_any(skb);
+
+ return NETDEV_TX_OK;
+}
+
+netdev_tx_t xsc_eth_xmit_start(struct sk_buff *skb, struct net_device *netdev)
+{
+ u32 ret;
+ u32 queue_id;
+ struct xsc_sq *sq;
+ struct xsc_tx_wqe *wqe;
+ u16 pi;
+ struct xsc_adapter *adapter = netdev_priv(netdev);
+ struct xsc_core_device *xdev = adapter->xdev;
+
+ if (!skb)
+ return NETDEV_TX_OK;
+
+ if (!adapter)
+ return NETDEV_TX_BUSY;
+
+ if (adapter->status != XSCALE_ETH_DRIVER_OK)
+ return NETDEV_TX_BUSY;
+
+ queue_id = skb_get_queue_mapping(skb);
+ assert(adapter->xdev, queue_id < XSC_ETH_MAX_TC_TOTAL);
+
+ sq = adapter->txq2sq[queue_id];
+ if (!sq)
+ return NETDEV_TX_BUSY;
+
+ wqe = xsc_sq_fetch_wqe(sq, xdev->caps.send_ds_num * XSC_SEND_WQE_DS, &pi);
+ assert(adapter->xdev, wqe);
+
+ ret = xsc_eth_xmit_frame(skb, sq, wqe, pi);
+
+ return ret;
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
index 18cafd410..bc5932812 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -6,8 +6,17 @@
#ifndef XSC_RXTX_H
#define XSC_RXTX_H
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
#include "xsc_eth.h"
+enum {
+ XSC_ETH_WQE_NONE_CSUM,
+ XSC_ETH_WQE_INNER_CSUM,
+ XSC_ETH_WQE_OUTER_CSUM,
+ XSC_ETH_WQE_INNER_AND_OUTER_CSUM,
+};
+
void xsc_cq_notify_hw_rearm(struct xsc_cq *cq);
void xsc_cq_notify_hw(struct xsc_cq *cq);
int xsc_eth_napi_poll(struct napi_struct *napi, int budget);
@@ -23,4 +32,31 @@ struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
u32 cqe_bcnt, u8 has_pph);
int xsc_poll_rx_cq(struct xsc_cq *cq, int budget);
+netdev_tx_t xsc_eth_xmit_start(struct sk_buff *skb, struct net_device *netdev);
+
+static inline void xsc_tx_dma_unmap(struct device *dev, struct xsc_sq_dma *dma)
+{
+ switch (dma->type) {
+ case XSC_DMA_MAP_SINGLE:
+ dma_unmap_single(dev, dma->addr, dma->size, DMA_TO_DEVICE);
+ break;
+ case XSC_DMA_MAP_PAGE:
+ dma_unmap_page(dev, dma->addr, dma->size, DMA_TO_DEVICE);
+ break;
+ default:
+ break;
+ }
+}
+
+static inline struct xsc_sq_dma *xsc_dma_get(struct xsc_sq *sq, u32 i)
+{
+ return &sq->db.dma_fifo[i & sq->dma_fifo_mask];
+}
+
+static inline bool xsc_wqc_has_room_for(struct xsc_wq_cyc *wq,
+ u16 cc, u16 pc, u16 n)
+{
+ return (xsc_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
+}
+
#endif /* XSC_RXTX_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
index 89f710a52..a6f0ec807 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -62,6 +62,8 @@ enum {
#define XSC_RECV_WQEBB_NUM_DS (XSC_RECV_WQE_BB / XSC_RECV_WQE_DS)
#define XSC_LOG_RECV_WQEBB_NUM_DS ilog2(XSC_RECV_WQEBB_NUM_DS)
+#define XSC_SEND_WQEBB_CTRL_NUM_DS 1
+
/* each ds holds one fragment in skb */
#define XSC_MAX_RX_FRAGS 4
#define XSC_RX_FRAG_SZ_ORDER 0
@@ -182,6 +184,11 @@ struct xsc_tx_wqe_info {
u8 num_dma;
};
+struct xsc_tx_wqe {
+ struct xsc_send_wqe_ctrl_seg ctrl;
+ struct xsc_wqe_data_seg data[];
+};
+
struct xsc_sq {
struct xsc_core_qp cqp;
/* dirtied @completion */
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 13/16] net-next/yunsilicon: Add eth rx
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (11 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 12/16] net-next/yunsilicon: Add ndo_start_xmit Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 19:47 ` Andrew Lunn
2024-12-18 10:50 ` [PATCH v1 14/16] net-next/yunsilicon: add ndo_get_stats64 Xin Tian
` (3 subsequent siblings)
16 siblings, 1 reply; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add eth rx
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 9 +
.../yunsilicon/xsc/net/xsc_eth_common.h | 28 +
.../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 564 +++++++++++++++++-
.../yunsilicon/xsc/net/xsc_eth_txrx.c | 90 ++-
.../yunsilicon/xsc/net/xsc_eth_txrx.h | 28 +
5 files changed, 716 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 417cb021c..d69be5352 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -227,6 +227,10 @@ struct xsc_qp_table {
};
// cq
+enum {
+ XSC_CQE_OWNER_MASK = 1,
+};
+
enum xsc_event {
XSC_EVENT_TYPE_COMP = 0x0,
XSC_EVENT_TYPE_COMM_EST = 0x02,//mad
@@ -633,4 +637,9 @@ static inline u8 xsc_get_user_mode(struct xsc_core_device *xdev)
return xdev->user_mode;
}
+static inline u8 get_cqe_opcode(struct xsc_cqe *cqe)
+{
+ return cqe->msg_opcode;
+}
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index 8b8398cfb..de4dd9e54 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -21,6 +21,8 @@
#define XSC_ETH_RX_MAX_HEAD_ROOM 256
#define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM)
+#define XSC_RX_MAX_HEAD (256)
+
#define XSC_QPN_SQN_STUB 1025
#define XSC_QPN_RQN_STUB 1024
@@ -145,6 +147,24 @@ enum channel_flags {
XSC_CHANNEL_NAPI_SCHED = 1,
};
+enum xsc_eth_priv_flag {
+ XSC_PFLAG_RX_NO_CSUM_COMPLETE,
+ XSC_PFLAG_SNIFFER,
+ XSC_PFLAG_DROPLESS_RQ,
+ XSC_PFLAG_RX_COPY_BREAK,
+ XSC_NUM_PFLAGS, /* Keep last */
+};
+
+#define XSC_SET_PFLAG(params, pflag, enable) \
+ do { \
+ if (enable) \
+ (params)->pflags |= BIT(pflag); \
+ else \
+ (params)->pflags &= ~(BIT(pflag)); \
+ } while (0)
+
+#define XSC_GET_PFLAG(params, pflag) (!!((params)->pflags & (BIT(pflag))))
+
struct xsc_eth_params {
u16 num_channels;
u16 max_num_ch;
@@ -208,4 +228,12 @@ union xsc_send_doorbell {
u32 send_data;
};
+union xsc_recv_doorbell {
+ struct{
+ s32 next_pid : 13;
+ u32 qp_num : 15;
+ };
+ u32 recv_data;
+};
+
#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index 286e5f9eb..9926423c9 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -33,10 +33,572 @@
* SOFTWARE.
*/
+#include <linux/net_tstamp.h>
+#include "xsc_eth.h"
#include "xsc_eth_txrx.h"
+#include "xsc_eth_common.h"
+#include <linux/device.h>
+#include "common/xsc_pp.h"
+#include "xsc_pph.h"
+
+#define PAGE_REF_ELEV (U16_MAX)
+/* Upper bound on number of packets that share a single page */
+#define PAGE_REF_THRSD (PAGE_SIZE / 64)
+
+static inline void xsc_rq_notify_hw(struct xsc_rq *rq)
+{
+ struct xsc_core_device *xdev = rq->cq.xdev;
+ struct xsc_wq_cyc *wq = &rq->wqe.wq;
+ union xsc_recv_doorbell doorbell_value;
+ u64 rqwqe_id = wq->wqe_ctr << (ilog2(xdev->caps.recv_ds_num));
+
+ /*reverse wqe index to ds index*/
+ doorbell_value.next_pid = rqwqe_id;
+ doorbell_value.qp_num = rq->rqn;
+
+ /* Make sure that descriptors are written before
+ * updating doorbell record and ringing the doorbell
+ */
+ wmb();
+ writel(doorbell_value.recv_data, REG_ADDR(xdev, xdev->regs.rx_db));
+}
+
+static inline void xsc_skb_set_hash(struct xsc_adapter *adapter,
+ struct xsc_cqe *cqe,
+ struct sk_buff *skb)
+{
+ struct xsc_rss_params *rss = &adapter->rss_param;
+ u32 hash_field;
+ bool l3_hash = false;
+ bool l4_hash = false;
+ int ht = 0;
+
+ if (adapter->netdev->features & NETIF_F_RXHASH) {
+ if (skb->protocol == htons(ETH_P_IP)) {
+ hash_field = rss->rx_hash_fields[XSC_TT_IPV4_TCP];
+ if (hash_field & XSC_HASH_FIELD_SEL_SRC_IP ||
+ hash_field & XSC_HASH_FIELD_SEL_DST_IP)
+ l3_hash = true;
+
+ if (hash_field & XSC_HASH_FIELD_SEL_SPORT ||
+ hash_field & XSC_HASH_FIELD_SEL_DPORT)
+ l4_hash = true;
+ } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ hash_field = rss->rx_hash_fields[XSC_TT_IPV6_TCP];
+ if (hash_field & XSC_HASH_FIELD_SEL_SRC_IPV6 ||
+ hash_field & XSC_HASH_FIELD_SEL_DST_IPV6)
+ l3_hash = true;
+
+ if (hash_field & XSC_HASH_FIELD_SEL_SPORT_V6 ||
+ hash_field & XSC_HASH_FIELD_SEL_DPORT_V6)
+ l4_hash = true;
+ }
+
+ if (l3_hash && l4_hash)
+ ht = PKT_HASH_TYPE_L4;
+ else if (l3_hash)
+ ht = PKT_HASH_TYPE_L3;
+ if (ht)
+ skb_set_hash(skb, be32_to_cpu(cqe->vni), ht);
+ }
+}
+
+static inline unsigned short from32to16(unsigned int x)
+{
+ /* add up 16-bit and 16-bit for 16+c bit */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
+
+static inline void xsc_handle_csum(struct xsc_cqe *cqe, struct xsc_rq *rq,
+ struct sk_buff *skb, struct xsc_wqe_frag_info *wi)
+{
+ struct xsc_channel *c = rq->cq.channel;
+ struct net_device *netdev = c->adapter->netdev;
+ struct xsc_dma_info *dma_info = wi->di;
+ int offset_from = wi->offset;
+ struct epp_pph *hw_pph = page_address(dma_info->page) + offset_from;
+
+ if (unlikely((netdev->features & NETIF_F_RXCSUM) == 0))
+ goto csum_none;
+
+ if (unlikely(XSC_GET_EPP2SOC_PPH_ERROR_BITMAP(hw_pph) & PACKET_UNKNOWN))
+ goto csum_none;
+
+ if (XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) &&
+ (!(cqe->csum_err & OUTER_AND_INNER))) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb->csum_level = 1;
+ skb->encapsulation = 1;
+ } else if (XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) &&
+ (!(cqe->csum_err & OUTER_BIT) && (cqe->csum_err & INNER_BIT))) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb->csum_level = 0;
+ skb->encapsulation = 1;
+ } else if (!XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) &&
+ (!(cqe->csum_err & OUTER_BIT))) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }
+
+ goto out;
+
+csum_none:
+ skb->csum = 0;
+ skb->ip_summed = CHECKSUM_NONE;
+out:
+ return;
+}
+
+static inline void xsc_build_rx_skb(struct xsc_cqe *cqe,
+ u32 cqe_bcnt,
+ struct xsc_rq *rq,
+ struct sk_buff *skb,
+ struct xsc_wqe_frag_info *wi)
+{
+ struct xsc_channel *c = rq->cq.channel;
+ struct net_device *netdev = c->netdev;
+ struct xsc_adapter *adapter = c->adapter;
+
+ skb->mac_len = ETH_HLEN;
+
+ skb_record_rx_queue(skb, rq->ix);
+ xsc_handle_csum(cqe, rq, skb, wi);
+
+ skb->protocol = eth_type_trans(skb, netdev);
+ xsc_skb_set_hash(adapter, cqe, skb);
+}
+
+static inline void xsc_complete_rx_cqe(struct xsc_rq *rq,
+ struct xsc_cqe *cqe,
+ u32 cqe_bcnt,
+ struct sk_buff *skb,
+ struct xsc_wqe_frag_info *wi)
+{
+ xsc_build_rx_skb(cqe, cqe_bcnt, rq, skb, wi);
+}
+
+static inline void xsc_add_skb_frag(struct xsc_rq *rq,
+ struct sk_buff *skb,
+ struct xsc_dma_info *di,
+ u32 frag_offset, u32 len,
+ unsigned int truesize)
+{
+ struct xsc_channel *c = rq->cq.channel;
+ struct device *dev = c->adapter->dev;
+
+ dma_sync_single_for_cpu(dev, di->addr + frag_offset, len, DMA_FROM_DEVICE);
+ page_ref_inc(di->page);
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ di->page, frag_offset, len, truesize);
+}
+
+static inline void xsc_copy_skb_header(struct device *dev,
+ struct sk_buff *skb,
+ struct xsc_dma_info *dma_info,
+ int offset_from, u32 headlen)
+{
+ void *from = page_address(dma_info->page) + offset_from;
+ /* Aligning len to sizeof(long) optimizes memcpy performance */
+ unsigned int len = ALIGN(headlen, sizeof(long));
+
+ dma_sync_single_for_cpu(dev, dma_info->addr + offset_from, len,
+ DMA_FROM_DEVICE);
+ skb_copy_to_linear_data(skb, from, len);
+}
+
+static inline struct sk_buff *xsc_build_linear_skb(struct xsc_rq *rq, void *va,
+ u32 frag_size, u16 headroom,
+ u32 cqe_bcnt)
+{
+ struct sk_buff *skb = build_skb(va, frag_size);
+
+ if (unlikely(!skb))
+ return NULL;
+
+ skb_reserve(skb, headroom);
+ skb_put(skb, cqe_bcnt);
+
+ return skb;
+}
+
+struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *wi,
+ u32 cqe_bcnt, u8 has_pph)
+{
+ struct xsc_dma_info *di = wi->di;
+ u16 rx_headroom = rq->buff.headroom;
+ int pph_len = has_pph ? XSC_PPH_HEAD_LEN : 0;
+ struct sk_buff *skb;
+ void *va, *data;
+ u32 frag_size;
+
+ va = page_address(di->page) + wi->offset;
+ data = va + rx_headroom + pph_len;
+ frag_size = XSC_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
+
+ dma_sync_single_range_for_cpu(rq->cq.xdev->device, di->addr, wi->offset,
+ frag_size, DMA_FROM_DEVICE);
+ prefetchw(va); /* xdp_frame data area */
+ prefetch(data);
+
+ skb = xsc_build_linear_skb(rq, va, frag_size, (rx_headroom + pph_len),
+ (cqe_bcnt - pph_len));
+ if (unlikely(!skb))
+ return NULL;
+
+ /* queue up for recycling/reuse */
+ page_ref_inc(di->page);
+
+ return skb;
+}
+
+struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *wi,
+ u32 cqe_bcnt, u8 has_pph)
+{
+ struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
+ struct xsc_wqe_frag_info *head_wi = wi;
+ struct xsc_wqe_frag_info *rx_wi = wi;
+ u16 headlen = min_t(u32, XSC_RX_MAX_HEAD, cqe_bcnt);
+ u16 frag_headlen = headlen;
+ u16 byte_cnt = cqe_bcnt - headlen;
+ struct sk_buff *skb;
+ struct xsc_channel *c = rq->cq.channel;
+ struct device *dev = c->adapter->dev;
+ struct net_device *netdev = c->adapter->netdev;
+ u8 fragcnt = 0;
+ u16 head_offset = head_wi->offset;
+ u16 frag_consumed_bytes = 0;
+ int i = 0;
+
+#ifndef NEED_CREATE_RX_THREAD
+ skb = napi_alloc_skb(rq->cq.napi, ALIGN(XSC_RX_MAX_HEAD, sizeof(long)));
+#else
+ skb = netdev_alloc_skb(netdev, ALIGN(XSC_RX_MAX_HEAD, sizeof(long)));
+#endif
+ if (unlikely(!skb))
+ return NULL;
+
+ prefetchw(skb->data);
+
+ if (likely(has_pph)) {
+ headlen = min_t(u32, XSC_RX_MAX_HEAD, (cqe_bcnt - XSC_PPH_HEAD_LEN));
+ frag_headlen = headlen + XSC_PPH_HEAD_LEN;
+ byte_cnt = cqe_bcnt - headlen - XSC_PPH_HEAD_LEN;
+ head_offset += XSC_PPH_HEAD_LEN;
+ }
+
+ if (byte_cnt == 0 && (XSC_GET_PFLAG(&c->adapter->nic_param, XSC_PFLAG_RX_COPY_BREAK))) {
+ for (i = 0; i < rq->wqe.info.num_frags; i++, wi++)
+ wi->is_available = 1;
+ goto ret;
+ }
+
+ for (i = 0; i < rq->wqe.info.num_frags; i++, rx_wi++)
+ rx_wi->is_available = 0;
+
+ while (byte_cnt) {
+ /*figure out whether the first fragment can be a page ?*/
+ frag_consumed_bytes =
+ min_t(u16, frag_info->frag_size - frag_headlen, byte_cnt);
+
+ xsc_add_skb_frag(rq, skb, wi->di, wi->offset + frag_headlen,
+ frag_consumed_bytes, frag_info->frag_stride);
+ byte_cnt -= frag_consumed_bytes;
+
+ /*to protect extend wqe read, drop exceed bytes*/
+ frag_headlen = 0;
+ fragcnt++;
+ if (fragcnt == rq->wqe.info.num_frags) {
+ if (byte_cnt) {
+ netdev_warn(netdev,
+ "large packet reach the maximum rev-wqe num.\n");
+ netdev_warn(netdev,
+ "%u bytes dropped: frag_num=%d, headlen=%d, cqe_cnt=%d, frag0_bytes=%d, frag_size=%d\n",
+ byte_cnt, fragcnt, headlen, cqe_bcnt,
+ frag_consumed_bytes, frag_info->frag_size);
+ }
+ break;
+ }
+
+ frag_info++;
+ wi++;
+ }
+
+ret:
+ /* copy header */
+ xsc_copy_skb_header(dev, skb, head_wi->di, head_offset, headlen);
+
+ /* skb linear part was allocated with headlen and aligned to long */
+ skb->tail += headlen;
+ skb->len += headlen;
+
+ return skb;
+}
+
+static inline bool xsc_rx_cache_is_empty(struct xsc_page_cache *cache)
+{
+ return cache->head == cache->tail;
+}
+
+static inline bool xsc_page_is_reserved(struct page *page)
+{
+ return page_is_pfmemalloc(page) || page_to_nid(page) != numa_mem_id();
+}
+
+static void xsc_page_dma_unmap(struct xsc_rq *rq, struct xsc_dma_info *dma_info)
+{
+ struct xsc_channel *c = rq->cq.channel;
+ struct device *dev = c->adapter->dev;
+
+ dma_unmap_page(dev, dma_info->addr, XSC_RX_FRAG_SZ, rq->buff.map_dir);
+}
+
+static void xsc_page_release_dynamic(struct xsc_rq *rq,
+ struct xsc_dma_info *dma_info, bool recycle)
+{
+ xsc_page_dma_unmap(rq, dma_info);
+ page_pool_recycle_direct(rq->page_pool, dma_info->page);
+}
+
+static inline void xsc_put_rx_frag(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *frag, bool recycle)
+{
+ if (frag->last_in_page)
+ xsc_page_release_dynamic(rq, frag->di, recycle);
+}
+
+static inline struct xsc_wqe_frag_info *get_frag(struct xsc_rq *rq, u16 ix)
+{
+ return &rq->wqe.frags[ix << rq->wqe.info.log_num_frags];
+}
+
+static inline void xsc_free_rx_wqe(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *wi, bool recycle)
+{
+ int i;
+
+ for (i = 0; i < rq->wqe.info.num_frags; i++, wi++) {
+ if (wi->is_available && recycle)
+ continue;
+ xsc_put_rx_frag(rq, wi, recycle);
+ }
+}
+
+static void xsc_dump_error_rqcqe(struct xsc_rq *rq,
+ struct xsc_cqe *cqe)
+{
+ struct xsc_channel *c = rq->cq.channel;
+ struct net_device *netdev = c->adapter->netdev;
+ u32 ci = xsc_cqwq_get_ci(&rq->cq.wq);
+
+ net_err_ratelimited("Error cqe on dev=%s, cqn=%d, ci=%d, rqn=%d, qpn=%d, error_code=0x%x\n",
+ netdev->name, rq->cq.xcq.cqn, ci,
+ rq->rqn, cqe->qp_id, get_cqe_opcode(cqe));
+}
+
+void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq,
+ struct xsc_rq *rq, struct xsc_cqe *cqe)
+{
+ struct xsc_wq_cyc *wq = &rq->wqe.wq;
+ struct xsc_channel *c = rq->cq.channel;
+ u8 cqe_opcode = get_cqe_opcode(cqe);
+ struct xsc_wqe_frag_info *wi;
+ struct sk_buff *skb;
+ u32 cqe_bcnt;
+ u16 ci;
+
+ ci = xsc_wq_cyc_ctr2ix(wq, cqwq->cc);
+ wi = get_frag(rq, ci);
+ if (unlikely(cqe_opcode & BIT(7))) {
+ xsc_dump_error_rqcqe(rq, cqe);
+ goto free_wqe;
+ }
+
+ cqe_bcnt = le32_to_cpu(cqe->msg_len);
+ if (cqe->has_pph && cqe_bcnt <= XSC_PPH_HEAD_LEN)
+ goto free_wqe;
+
+ if (unlikely(cqe_bcnt > rq->frags_sz)) {
+ if (!XSC_GET_PFLAG(&c->adapter->nic_param, XSC_PFLAG_DROPLESS_RQ))
+ goto free_wqe;
+ }
+
+ cqe_bcnt = min_t(u32, cqe_bcnt, rq->frags_sz);
+ skb = rq->wqe.skb_from_cqe(rq, wi, cqe_bcnt, cqe->has_pph);
+ if (!skb)
+ goto free_wqe;
+
+ xsc_complete_rx_cqe(rq, cqe,
+ cqe->has_pph == 1 ? cqe_bcnt - XSC_PPH_HEAD_LEN : cqe_bcnt,
+ skb, wi);
+
+ napi_gro_receive(rq->cq.napi, skb);
+
+free_wqe:
+ xsc_free_rx_wqe(rq, wi, true);
+ xsc_wq_cyc_pop(wq);
+}
int xsc_poll_rx_cq(struct xsc_cq *cq, int budget)
{
- // TBD
+ struct xsc_rq *rq = container_of(cq, struct xsc_rq, cq);
+ struct xsc_cqwq *cqwq = &cq->wq;
+ struct xsc_cqe *cqe;
+ int work_done = 0;
+
+ if (!test_bit(XSC_ETH_RQ_STATE_ENABLED, &rq->state))
+ return 0;
+
+ while ((work_done < budget) && (cqe = xsc_cqwq_get_cqe(cqwq))) {
+ rq->handle_rx_cqe(cqwq, rq, cqe);
+ ++work_done;
+
+ xsc_cqwq_pop(cqwq);
+ }
+
+ if (!work_done)
+ goto out;
+
+ xsc_cq_notify_hw(cq);
+ /* ensure cq space is freed before enabling more cqes */
+ wmb();
+
+out:
+
+ return work_done;
+}
+
+static inline int xsc_page_alloc_mapped(struct xsc_rq *rq,
+ struct xsc_dma_info *dma_info)
+{
+ struct xsc_channel *c = rq->cq.channel;
+ struct device *dev = c->adapter->dev;
+
+ dma_info->page = page_pool_dev_alloc_pages(rq->page_pool);
+ if (unlikely(!dma_info->page))
+ return -ENOMEM;
+
+ dma_info->addr = dma_map_page(dev, dma_info->page, 0,
+ XSC_RX_FRAG_SZ, rq->buff.map_dir);
+ if (unlikely(dma_mapping_error(dev, dma_info->addr))) {
+ page_pool_recycle_direct(rq->page_pool, dma_info->page);
+ dma_info->page = NULL;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static inline int xsc_get_rx_frag(struct xsc_rq *rq,
+ struct xsc_wqe_frag_info *frag)
+{
+ int err = 0;
+
+ if (!frag->offset && !frag->is_available)
+ /* On first frag (offset == 0), replenish page (dma_info actually).
+ * Other frags that point to the same dma_info (with a different
+ * offset) should just use the new one without replenishing again
+ * by themselves.
+ */
+ err = xsc_page_alloc_mapped(rq, frag->di);
+
+ return err;
+}
+
+static int xsc_alloc_rx_wqe(struct xsc_rq *rq, struct xsc_eth_rx_wqe_cyc *wqe, u16 ix)
+{
+ struct xsc_wqe_frag_info *frag = get_frag(rq, ix);
+ u64 addr;
+ int i;
+ int err;
+
+ for (i = 0; i < rq->wqe.info.num_frags; i++, frag++) {
+ err = xsc_get_rx_frag(rq, frag);
+ if (unlikely(err))
+ goto free_frags;
+
+ addr = cpu_to_le64(frag->di->addr + frag->offset + rq->buff.headroom);
+ wqe->data[i].va = addr;
+ }
+
return 0;
+
+free_frags:
+ while (--i >= 0)
+ xsc_put_rx_frag(rq, --frag, true);
+
+ return err;
+}
+
+void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix)
+{
+ struct xsc_wqe_frag_info *wi = get_frag(rq, ix);
+
+ xsc_free_rx_wqe(rq, wi, false);
+}
+
+static int xsc_alloc_rx_wqes(struct xsc_rq *rq, u16 ix, u8 wqe_bulk)
+{
+ struct xsc_wq_cyc *wq = &rq->wqe.wq;
+ struct xsc_eth_rx_wqe_cyc *wqe;
+ int err;
+ int i;
+ int idx;
+
+ for (i = 0; i < wqe_bulk; i++) {
+ idx = xsc_wq_cyc_ctr2ix(wq, (ix + i));
+ wqe = xsc_wq_cyc_get_wqe(wq, idx);
+
+ err = xsc_alloc_rx_wqe(rq, wqe, idx);
+ if (unlikely(err))
+ goto free_wqes;
+ }
+
+ return 0;
+
+free_wqes:
+ while (--i >= 0)
+ xsc_eth_dealloc_rx_wqe(rq, ix + i);
+
+ return err;
+}
+
+bool xsc_eth_post_rx_wqes(struct xsc_rq *rq)
+{
+ struct xsc_wq_cyc *wq = &rq->wqe.wq;
+ u8 wqe_bulk, wqe_bulk_min;
+ int alloc;
+ u16 head;
+ int err;
+
+ wqe_bulk = rq->wqe.info.wqe_bulk;
+ wqe_bulk_min = rq->wqe.info.wqe_bulk_min;
+ if (xsc_wq_cyc_missing(wq) < wqe_bulk)
+ return false;
+
+ do {
+ head = xsc_wq_cyc_get_head(wq);
+
+ alloc = min_t(int, wqe_bulk, xsc_wq_cyc_missing(wq));
+ if (alloc < wqe_bulk && alloc >= wqe_bulk_min)
+ alloc = alloc & 0xfffffffe;
+
+ if (alloc > 0) {
+ err = xsc_alloc_rx_wqes(rq, head, alloc);
+ if (unlikely(err))
+ break;
+
+ xsc_wq_cyc_push_n(wq, alloc);
+ }
+ } while (xsc_wq_cyc_missing(wq) >= wqe_bulk_min);
+
+ dma_wmb();
+
+ /* ensure wqes are visible to device before updating doorbell record */
+ xsc_rq_notify_hw(rq);
+
+ return !!err;
}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
index 2dd4aa3cb..e75610dbf 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
@@ -41,10 +41,96 @@ static inline bool xsc_channel_no_affinity_change(struct xsc_channel *c)
return cpumask_test_cpu(current_cpu, c->aff_mask);
}
+static void xsc_dump_error_sqcqe(struct xsc_sq *sq,
+ struct xsc_cqe *cqe)
+{
+ u32 ci = xsc_cqwq_get_ci(&sq->cq.wq);
+ struct net_device *netdev = sq->channel->netdev;
+
+ net_err_ratelimited("Err cqe on dev %s cqn=0x%x ci=0x%x sqn=0x%x err_code=0x%x qpid=0x%x\n",
+ netdev->name, sq->cq.xcq.cqn, ci,
+ sq->sqn, get_cqe_opcode(cqe), cqe->qp_id);
+}
+
static bool xsc_poll_tx_cq(struct xsc_cq *cq, int napi_budget)
{
- // TBD
- return true;
+ struct xsc_adapter *adapter;
+ struct device *dev;
+ struct xsc_sq *sq;
+ struct xsc_cqe *cqe;
+ u32 dma_fifo_cc;
+ u32 nbytes = 0;
+ u16 npkts = 0;
+ u16 sqcc;
+ int i = 0;
+
+ sq = container_of(cq, struct xsc_sq, cq);
+ if (!test_bit(XSC_ETH_SQ_STATE_ENABLED, &sq->state))
+ return false;
+
+ adapter = sq->channel->adapter;
+ dev = adapter->dev;
+
+ cqe = xsc_cqwq_get_cqe(&cq->wq);
+ if (!cqe)
+ goto out;
+
+ if (unlikely(get_cqe_opcode(cqe) & BIT(7))) {
+ xsc_dump_error_sqcqe(sq, cqe);
+ return false;
+ }
+
+ sqcc = sq->cc;
+
+ /* avoid dirtying sq cache line every cqe */
+ dma_fifo_cc = sq->dma_fifo_cc;
+ i = 0;
+ do {
+ struct xsc_tx_wqe_info *wi;
+ struct sk_buff *skb;
+ int j;
+ u16 ci;
+
+ xsc_cqwq_pop(&cq->wq);
+
+ ci = xsc_wq_cyc_ctr2ix(&sq->wq, sqcc);
+ wi = &sq->db.wqe_info[ci];
+ skb = wi->skb;
+
+ /*cqe may be overstanding in real test, not by nop in other*/
+ if (unlikely(!skb))
+ continue;
+
+ for (j = 0; j < wi->num_dma; j++) {
+ struct xsc_sq_dma *dma = xsc_dma_get(sq, dma_fifo_cc++);
+
+ xsc_tx_dma_unmap(dev, dma);
+ }
+
+ npkts++;
+ nbytes += wi->num_bytes;
+ sqcc += wi->num_wqebbs;
+ napi_consume_skb(skb, 0);
+
+ } while ((++i <= napi_budget) && (cqe = xsc_cqwq_get_cqe(&cq->wq)));
+
+ xsc_cq_notify_hw(cq);
+
+ /* ensure cq space is freed before enabling more cqes */
+ wmb();
+
+ sq->dma_fifo_cc = dma_fifo_cc;
+ sq->cc = sqcc;
+
+ netdev_tx_completed_queue(sq->txq, npkts, nbytes);
+
+ if (netif_tx_queue_stopped(sq->txq) &&
+ xsc_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room)) {
+ netif_tx_wake_queue(sq->txq);
+ }
+
+out:
+ return (i == napi_budget);
}
int xsc_eth_napi_poll(struct napi_struct *napi, int budget)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
index bc5932812..cc0dd8f0b 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -59,4 +59,32 @@ static inline bool xsc_wqc_has_room_for(struct xsc_wq_cyc *wq,
return (xsc_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
}
+static inline struct xsc_cqe *xsc_cqwq_get_cqe_buff(struct xsc_cqwq *wq, u32 ix)
+{
+ struct xsc_cqe *cqe = xsc_frag_buf_get_wqe(&wq->fbc, ix);
+
+ return cqe;
+}
+
+static inline struct xsc_cqe *xsc_cqwq_get_cqe(struct xsc_cqwq *wq)
+{
+ struct xsc_cqe *cqe;
+ u8 cqe_ownership_bit;
+ u8 sw_ownership_val;
+ u32 ci = xsc_cqwq_get_ci(wq);
+
+ cqe = xsc_cqwq_get_cqe_buff(wq, ci);
+
+ cqe_ownership_bit = cqe->owner & XSC_CQE_OWNER_MASK;
+ sw_ownership_val = xsc_cqwq_get_wrap_cnt(wq) & 1;
+
+ if (cqe_ownership_bit != sw_ownership_val)
+ return NULL;
+
+ /* ensure cqe content is read after cqe ownership bit */
+ dma_rmb();
+
+ return cqe;
+}
+
#endif /* XSC_RXTX_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 14/16] net-next/yunsilicon: add ndo_get_stats64
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (12 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 13/16] net-next/yunsilicon: Add eth rx Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 15/16] net-next/yunsilicon: Add ndo_set_mac_address Xin Tian
` (2 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Support nic stats
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +-
.../net/ethernet/yunsilicon/xsc/net/main.c | 24 ++++++++++-
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 3 ++
.../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 4 ++
.../yunsilicon/xsc/net/xsc_eth_stats.c | 42 +++++++++++++++++++
.../yunsilicon/xsc/net/xsc_eth_stats.h | 33 +++++++++++++++
.../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 5 +++
.../ethernet/yunsilicon/xsc/net/xsc_queue.h | 2 +
8 files changed, 112 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
index 7cfc2aaa2..e1cfa3cdf 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
@@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
-xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o
+xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o xsc_eth_stats.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index 338e308a6..0c6e949b5 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -541,12 +541,15 @@ static int xsc_eth_open_qp_sq(struct xsc_channel *c,
struct xsc_core_device *xdev = adapter->xdev;
u8 q_log_size = psq_param->sq_attr.q_log_size;
u8 ele_log_size = psq_param->sq_attr.ele_log_size;
+ struct xsc_stats *stats = adapter->stats;
+ struct xsc_channel_stats *channel_stats = &stats->channel_stats[c->chl_idx];
struct xsc_create_qp_mbox_in *in;
struct xsc_modify_raw_qp_mbox_in *modify_in;
int hw_npages;
int inlen;
int ret;
+ psq->stats = &channel_stats->sq[sq_idx];
psq_param->wq.db_numa_node = cpu_to_node(c->cpu);
ret = xsc_eth_wq_cyc_create(xdev, &psq_param->wq,
@@ -845,10 +848,13 @@ static int xsc_eth_alloc_rq(struct xsc_channel *c,
struct page_pool_params pagepool_params = { 0 };
u32 pool_size = 1 << q_log_size;
u8 ele_log_size = prq_param->rq_attr.ele_log_size;
+ struct xsc_stats *stats = c->adapter->stats;
+ struct xsc_channel_stats *channel_stats = &stats->channel_stats[c->chl_idx];
int wq_sz;
int i, f;
int ret = 0;
+ prq->stats = &channel_stats->rq;
prq_param->wq.db_numa_node = cpu_to_node(c->cpu);
ret = xsc_eth_wq_cyc_create(c->adapter->xdev, &prq_param->wq,
@@ -1634,6 +1640,13 @@ static int xsc_eth_close(struct net_device *netdev)
return ret;
}
+static void xsc_eth_get_stats(struct net_device *netdev, struct rtnl_link_stats64 *stats)
+{
+ struct xsc_adapter *adapter = netdev_priv(netdev);
+
+ xsc_eth_fold_sw_stats64(adapter, stats);
+}
+
static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz)
{
struct xsc_set_mtu_mbox_in in;
@@ -1663,6 +1676,7 @@ static const struct net_device_ops xsc_netdev_ops = {
.ndo_open = xsc_eth_open,
.ndo_stop = xsc_eth_close,
.ndo_start_xmit = xsc_eth_xmit_start,
+ .ndo_get_stats64 = xsc_eth_get_stats,
};
static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
@@ -1872,16 +1886,22 @@ static void *xsc_eth_add(struct xsc_core_device *xdev)
goto err_cleanup_netdev;
}
+ adapter->stats = kvzalloc(sizeof(*adapter->stats), GFP_KERNEL);
+ if (!adapter->stats)
+ goto err_detach;
+
err = register_netdev(netdev);
if (err) {
xsc_core_warn(xdev, "register_netdev failed, err=%d\n", err);
- goto err_detach;
+ goto err_free_stats;
}
xdev->netdev = (void *)netdev;
return adapter;
+err_free_stats:
+ kfree(adapter->stats);
err_detach:
xsc_eth_detach(xdev, adapter);
err_cleanup_netdev:
@@ -1908,7 +1928,7 @@ static void xsc_eth_remove(struct xsc_core_device *xdev, void *context)
xsc_core_info(adapter->xdev, "remove netdev %s entry\n", adapter->netdev->name);
unregister_netdev(adapter->netdev);
-
+ kfree(adapter->stats);
free_netdev(adapter->netdev);
xdev->netdev = NULL;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 3e5614eb1..45d8a8cbe 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -9,6 +9,7 @@
#include <linux/udp.h>
#include "common/xsc_device.h"
#include "xsc_eth_common.h"
+#include "xsc_eth_stats.h"
#define XSC_INVALID_LKEY 0x100
@@ -48,6 +49,8 @@ struct xsc_adapter {
u32 status;
struct mutex status_lock; // protect status
+
+ struct xsc_stats *stats;
};
#endif /* XSC_ETH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index 9926423c9..bbc8add75 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -176,6 +176,10 @@ static inline void xsc_complete_rx_cqe(struct xsc_rq *rq,
struct sk_buff *skb,
struct xsc_wqe_frag_info *wi)
{
+ struct xsc_rq_stats *stats = rq->stats;
+
+ stats->packets++;
+ stats->bytes += cqe_bcnt;
xsc_build_rx_skb(cqe, cqe_bcnt, rq, skb, wi);
}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
new file mode 100644
index 000000000..10a014237
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "xsc_eth_stats.h"
+#include "xsc_eth.h"
+
+static inline int xsc_get_netdev_max_channels(struct xsc_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+
+ return min_t(unsigned int, netdev->num_rx_queues,
+ netdev->num_tx_queues);
+}
+
+static inline int xsc_get_netdev_max_tc(struct xsc_adapter *adapter)
+{
+ return adapter->nic_param.num_tc;
+}
+
+void xsc_eth_fold_sw_stats64(struct xsc_adapter *adapter, struct rtnl_link_stats64 *s)
+{
+ int i, j;
+
+ for (i = 0; i < xsc_get_netdev_max_channels(adapter); i++) {
+ struct xsc_channel_stats *channel_stats = &adapter->stats->channel_stats[i];
+ struct xsc_rq_stats *rq_stats = &channel_stats->rq;
+
+ s->rx_packets += rq_stats->packets;
+ s->rx_bytes += rq_stats->bytes;
+
+ for (j = 0; j < xsc_get_netdev_max_tc(adapter); j++) {
+ struct xsc_sq_stats *sq_stats = &channel_stats->sq[j];
+
+ s->tx_packets += sq_stats->packets;
+ s->tx_bytes += sq_stats->bytes;
+ s->tx_dropped += sq_stats->dropped;
+ }
+ }
+}
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
new file mode 100644
index 000000000..8e97b5507
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef XSC_EN_STATS_H
+#define XSC_EN_STATS_H
+
+#include "xsc_eth_common.h"
+
+struct xsc_rq_stats {
+ u64 packets;
+ u64 bytes;
+};
+
+struct xsc_sq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 dropped;
+};
+
+struct xsc_channel_stats {
+ struct xsc_sq_stats sq[XSC_MAX_NUM_TC];
+ struct xsc_rq_stats rq;
+} ____cacheline_aligned_in_smp;
+
+struct xsc_stats {
+ struct xsc_channel_stats channel_stats[XSC_ETH_MAX_NUM_CHANNELS];
+};
+
+void xsc_eth_fold_sw_stats64(struct xsc_adapter *adapter, struct rtnl_link_stats64 *s);
+
+#endif /* XSC_EN_STATS_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
index 67e68485e..2aee1d97c 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
@@ -201,6 +201,7 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb,
struct xsc_send_wqe_ctrl_seg *cseg;
struct xsc_wqe_data_seg *dseg;
struct xsc_tx_wqe_info *wi;
+ struct xsc_sq_stats *stats = sq->stats;
struct xsc_core_device *xdev = sq->cq.xdev;
u16 ds_cnt;
u16 mss, ihs, headlen;
@@ -219,11 +220,13 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb,
mss = skb_shinfo(skb)->gso_size;
ihs = xsc_tx_get_gso_ihs(sq, skb);
num_bytes = skb->len;
+ stats->packets += skb_shinfo(skb)->gso_segs;
} else {
opcode = XSC_OPCODE_RAW;
mss = 0;
ihs = 0;
num_bytes = skb->len;
+ stats->packets++;
}
/*linear data in skb*/
@@ -261,10 +264,12 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb,
xsc_txwqe_complete(sq, skb, opcode, ds_cnt, num_wqebbs, num_bytes,
num_dma, wi);
+ stats->bytes += num_bytes;
return NETDEV_TX_OK;
err_drop:
+ stats->dropped++;
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
index a6f0ec807..43f947f43 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -156,6 +156,7 @@ struct xsc_rq {
unsigned long state;
struct work_struct recover_work;
+ struct xsc_rq_stats *stats;
u32 hw_mtu;
u32 frags_sz;
@@ -204,6 +205,7 @@ struct xsc_sq {
/* read only */
struct xsc_wq_cyc wq;
u32 dma_fifo_mask;
+ struct xsc_sq_stats *stats;
struct {
struct xsc_sq_dma *dma_fifo;
struct xsc_tx_wqe_info *wqe_info;
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 15/16] net-next/yunsilicon: Add ndo_set_mac_address
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (13 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 14/16] net-next/yunsilicon: add ndo_get_stats64 Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 10:50 ` [PATCH v1 16/16] net-next/yunsilicon: Add change mtu Xin Tian
2024-12-19 0:35 ` [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Jakub Kicinski
16 siblings, 0 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add ndo_set_mac_address
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 2 +
.../net/ethernet/yunsilicon/xsc/net/main.c | 22 ++++++
.../net/ethernet/yunsilicon/xsc/pci/vport.c | 72 +++++++++++++++++++
3 files changed, 96 insertions(+)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index d69be5352..5c60b3126 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -612,6 +612,8 @@ int xsc_register_interface(struct xsc_interface *intf);
void xsc_unregister_interface(struct xsc_interface *intf);
u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport);
+int xsc_core_modify_nic_vport_mac_address(struct xsc_core_device *xdev,
+ u16 vport, u8 *addr, bool perm_mac);
static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset)
{
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index 0c6e949b5..6df7ed3bb 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -1647,6 +1647,27 @@ static void xsc_eth_get_stats(struct net_device *netdev, struct rtnl_link_stats6
xsc_eth_fold_sw_stats64(adapter, stats);
}
+static int xsc_eth_set_mac(struct net_device *netdev, void *addr)
+{
+ struct xsc_adapter *adapter = netdev_priv(netdev);
+ struct sockaddr *saddr = addr;
+ struct xsc_core_device *xdev = adapter->xdev;
+ int ret;
+
+ if (!is_valid_ether_addr(saddr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ ret = xsc_core_modify_nic_vport_mac_address(xdev, 0, saddr->sa_data, false);
+ if (ret)
+ xsc_core_err(adapter->xdev, "%s: xsc set mac addr failed\n", __func__);
+
+ netif_addr_lock_bh(netdev);
+ eth_hw_addr_set(netdev, saddr->sa_data);
+ netif_addr_unlock_bh(netdev);
+
+ return 0;
+}
+
static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz)
{
struct xsc_set_mtu_mbox_in in;
@@ -1677,6 +1698,7 @@ static const struct net_device_ops xsc_netdev_ops = {
.ndo_stop = xsc_eth_close,
.ndo_start_xmit = xsc_eth_xmit_start,
.ndo_get_stats64 = xsc_eth_get_stats,
+ .ndo_set_mac_address = xsc_eth_set_mac,
};
static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
index 8200f6c91..f044ac009 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
@@ -6,6 +6,8 @@
#include "common/xsc_core.h"
#include "common/xsc_driver.h"
+#define LAG_ID_INVALID U16_MAX
+
u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport)
{
struct xsc_query_vport_state_in in;
@@ -28,3 +30,73 @@ u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport)
return out.state;
}
EXPORT_SYMBOL(xsc_core_query_vport_state);
+
+static int xsc_modify_nic_vport_context(struct xsc_core_device *xdev, void *in,
+ int inlen)
+{
+ struct xsc_modify_nic_vport_context_out out;
+ struct xsc_modify_nic_vport_context_in *tmp;
+ int err;
+
+ memset(&out, 0, sizeof(out));
+ tmp = (struct xsc_modify_nic_vport_context_in *)in;
+ tmp->hdr.opcode = cpu_to_be16(XSC_CMD_OP_MODIFY_NIC_VPORT_CONTEXT);
+
+ err = xsc_cmd_exec(xdev, in, inlen, &out, sizeof(out));
+ if (err || out.hdr.status) {
+ xsc_core_err(xdev, "fail to modify nic vport err=%d status=%d\n",
+ err, out.hdr.status);
+ }
+ return err;
+}
+
+static int __xsc_modify_nic_vport_mac_address(struct xsc_core_device *xdev,
+ u16 vport, u8 *addr, int force_other, bool perm_mac)
+{
+ struct xsc_modify_nic_vport_context_in *in;
+ int err;
+ int in_sz;
+ u8 *mac_addr;
+ u16 caps = 0;
+ u16 caps_mask = 0;
+ u16 lag_id = LAG_ID_INVALID;
+
+ in_sz = sizeof(struct xsc_modify_nic_vport_context_in) + 2;
+
+ in = kzalloc(in_sz, GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+
+ in->lag_id = cpu_to_be16(lag_id);
+
+ if (perm_mac) {
+ in->field_select.permanent_address = 1;
+ mac_addr = in->nic_vport_ctx.permanent_address;
+ } else {
+ in->field_select.current_address = 1;
+ mac_addr = in->nic_vport_ctx.current_address;
+ }
+
+ caps_mask |= BIT(XSC_TBM_CAP_PP_BYPASS);
+ in->caps = cpu_to_be16(caps);
+ in->caps_mask = cpu_to_be16(caps_mask);
+
+ ether_addr_copy(mac_addr, addr);
+
+ in->field_select.addresses_list = 1;
+ in->nic_vport_ctx.vlan_allowed = 0;
+
+ err = xsc_modify_nic_vport_context(xdev, in, in_sz);
+ if (err)
+ xsc_core_err(xdev, "modify nic vport context failed\n");
+
+ kfree(in);
+ return err;
+}
+
+int xsc_core_modify_nic_vport_mac_address(struct xsc_core_device *xdev,
+ u16 vport, u8 *addr, bool perm_mac)
+{
+ return __xsc_modify_nic_vport_mac_address(xdev, vport, addr, 0, perm_mac);
+}
+EXPORT_SYMBOL(xsc_core_modify_nic_vport_mac_address);
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 16/16] net-next/yunsilicon: Add change mtu
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (14 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 15/16] net-next/yunsilicon: Add ndo_set_mac_address Xin Tian
@ 2024-12-18 10:50 ` Xin Tian
2024-12-18 18:31 ` Andrew Lunn
2024-12-19 0:35 ` [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Jakub Kicinski
16 siblings, 1 reply; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:50 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Add ndo_change_mtu
Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Signed-off-by: Xin Tian <tianx@yunsilicon.com>
---
.../net/ethernet/yunsilicon/xsc/net/main.c | 185 ++++++++++++++++++
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 2 +
2 files changed, 187 insertions(+)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index 6df7ed3bb..65d17d311 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -1668,6 +1668,134 @@ static int xsc_eth_set_mac(struct net_device *netdev, void *addr)
return 0;
}
+static void xsc_eth_rss_params_change(struct xsc_adapter *adapter, u32 change, void *modify)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+ struct xsc_rss_params *rss = &adapter->rss_param;
+ struct xsc_eth_params *params = &adapter->nic_param;
+ struct xsc_cmd_modify_nic_hca_mbox_in *in =
+ (struct xsc_cmd_modify_nic_hca_mbox_in *)modify;
+ u32 hash_field = 0;
+ int key_len;
+ u8 rss_caps_mask = 0;
+
+ if (xsc_get_user_mode(xdev))
+ return;
+
+ if (change & BIT(XSC_RSS_RXQ_DROP)) {
+ in->rss.rqn_base = cpu_to_be16(adapter->channels.rqn_base -
+ xdev->caps.raweth_rss_qp_id_base);
+ in->rss.rqn_num = 0;
+ rss_caps_mask |= BIT(XSC_RSS_RXQ_DROP);
+ goto rss_caps;
+ }
+
+ if (change & BIT(XSC_RSS_RXQ_UPDATE)) {
+ in->rss.rqn_base = cpu_to_be16(adapter->channels.rqn_base -
+ xdev->caps.raweth_rss_qp_id_base);
+ in->rss.rqn_num = cpu_to_be16(params->num_channels);
+ rss_caps_mask |= BIT(XSC_RSS_RXQ_UPDATE);
+ }
+
+ if (change & BIT(XSC_RSS_HASH_KEY_UPDATE)) {
+ key_len = min(sizeof(in->rss.hash_key), sizeof(rss->toeplitz_hash_key));
+ memcpy(&in->rss.hash_key, rss->toeplitz_hash_key, key_len);
+ rss_caps_mask |= BIT(XSC_RSS_HASH_KEY_UPDATE);
+ }
+
+ if (change & BIT(XSC_RSS_HASH_TEMP_UPDATE)) {
+ hash_field = rss->rx_hash_fields[XSC_TT_IPV4_TCP] |
+ rss->rx_hash_fields[XSC_TT_IPV6_TCP];
+ in->rss.hash_tmpl = cpu_to_be32(hash_field);
+ rss_caps_mask |= BIT(XSC_RSS_HASH_TEMP_UPDATE);
+ }
+
+ if (change & BIT(XSC_RSS_HASH_FUNC_UPDATE)) {
+ in->rss.hfunc = xsc_hash_func_type(rss->hfunc);
+ rss_caps_mask |= BIT(XSC_RSS_HASH_FUNC_UPDATE);
+ }
+
+rss_caps:
+ if (rss_caps_mask) {
+ in->rss.caps_mask = rss_caps_mask;
+ in->rss.rss_en = 1;
+ in->nic.caps_mask = cpu_to_be16(BIT(XSC_TBM_CAP_RSS));
+ in->nic.caps = in->nic.caps_mask;
+ }
+}
+
+static int xsc_eth_modify_nic_hca(struct xsc_adapter *adapter, u32 flags)
+{
+ struct xsc_core_device *xdev = adapter->xdev;
+ struct xsc_cmd_modify_nic_hca_mbox_in in = {};
+ struct xsc_cmd_modify_nic_hca_mbox_out out = {};
+ int err = 0;
+
+ in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_MODIFY_NIC_HCA);
+
+ xsc_eth_rss_params_change(adapter, flags, &in);
+ if (in.rss.caps_mask) {
+ err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (err || out.hdr.status) {
+ xsc_core_err(xdev, "failed!! err=%d, status=%u\n",
+ err, out.hdr.status);
+ return -ENOEXEC;
+ }
+ }
+
+ return 0;
+}
+
+static int xsc_safe_switch_channels(struct xsc_adapter *adapter,
+ xsc_eth_fp_preactivate preactivate)
+{
+ struct net_device *netdev = adapter->netdev;
+ int carrier_ok;
+ int ret = 0;
+
+ adapter->status = XSCALE_ETH_DRIVER_CLOSE;
+
+ carrier_ok = netif_carrier_ok(netdev);
+ netif_carrier_off(netdev);
+ ret = xsc_eth_modify_nic_hca(adapter, BIT(XSC_RSS_RXQ_DROP));
+ if (ret)
+ goto close_channels;
+
+ xsc_eth_deactivate_priv_channels(adapter);
+ xsc_eth_close_channels(adapter);
+
+ if (preactivate) {
+ ret = preactivate(adapter);
+ if (ret)
+ goto out;
+ }
+
+ ret = xsc_eth_open_channels(adapter);
+ if (ret)
+ goto close_channels;
+
+ xsc_eth_activate_priv_channels(adapter);
+ ret = xsc_eth_modify_nic_hca(adapter, BIT(XSC_RSS_RXQ_UPDATE));
+ if (ret)
+ goto close_channels;
+
+ adapter->status = XSCALE_ETH_DRIVER_OK;
+
+ goto out;
+
+close_channels:
+ xsc_eth_deactivate_priv_channels(adapter);
+ xsc_eth_close_channels(adapter);
+
+out:
+ if (carrier_ok)
+ netif_carrier_on(netdev);
+ xsc_core_dbg(adapter->xdev, "channels=%d, mtu=%d, err=%d\n",
+ adapter->nic_param.num_channels,
+ adapter->nic_param.mtu, ret);
+ return ret;
+}
+
static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz)
{
struct xsc_set_mtu_mbox_in in;
@@ -1693,12 +1821,69 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_
return ret;
}
+static int xsc_eth_nic_mtu_changed(struct xsc_adapter *priv)
+{
+ u32 new_mtu = priv->nic_param.mtu;
+ int ret;
+
+ ret = xsc_eth_set_hw_mtu(priv->xdev, XSC_SW2HW_MTU(new_mtu),
+ XSC_SW2HW_RX_PKT_LEN(new_mtu));
+
+ return ret;
+}
+
+static int xsc_eth_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct xsc_adapter *adapter = netdev_priv(netdev);
+ int old_mtu = netdev->mtu;
+ int ret = 0;
+ int max_buf_len = 0;
+
+ if (new_mtu > netdev->max_mtu || new_mtu < netdev->min_mtu) {
+ netdev_err(netdev, "%s: Bad MTU (%d), valid range is: [%d..%d]\n",
+ __func__, new_mtu, netdev->min_mtu, netdev->max_mtu);
+ return -EINVAL;
+ }
+
+ if (!xsc_rx_is_linear_skb(new_mtu)) {
+ max_buf_len = adapter->xdev->caps.recv_ds_num * PAGE_SIZE;
+ if (new_mtu > max_buf_len) {
+ netdev_err(netdev, "Bad MTU (%d), max buf len is %d\n",
+ new_mtu, max_buf_len);
+ return -EINVAL;
+ }
+ }
+ mutex_lock(&adapter->status_lock);
+ adapter->nic_param.mtu = new_mtu;
+ if (adapter->status != XSCALE_ETH_DRIVER_OK) {
+ ret = xsc_eth_nic_mtu_changed(adapter);
+ if (ret)
+ adapter->nic_param.mtu = old_mtu;
+ else
+ netdev->mtu = adapter->nic_param.mtu;
+ goto out;
+ }
+
+ ret = xsc_safe_switch_channels(adapter, xsc_eth_nic_mtu_changed);
+ if (ret)
+ goto out;
+
+ netdev->mtu = adapter->nic_param.mtu;
+
+out:
+ mutex_unlock(&adapter->status_lock);
+ xsc_core_info(adapter->xdev, "mtu change from %d to %d, new_mtu=%d, err=%d\n",
+ old_mtu, netdev->mtu, new_mtu, ret);
+ return ret;
+}
+
static const struct net_device_ops xsc_netdev_ops = {
.ndo_open = xsc_eth_open,
.ndo_stop = xsc_eth_close,
.ndo_start_xmit = xsc_eth_xmit_start,
.ndo_get_stats64 = xsc_eth_get_stats,
.ndo_set_mac_address = xsc_eth_set_mac,
+ .ndo_change_mtu = xsc_eth_change_mtu,
};
static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 45d8a8cbe..3d0eb95af 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -53,4 +53,6 @@ struct xsc_adapter {
struct xsc_stats *stats;
};
+typedef int (*xsc_eth_fp_preactivate)(struct xsc_adapter *priv);
+
#endif /* XSC_ETH_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver
@ 2024-12-18 10:51 Xin Tian
2024-12-18 10:50 ` [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework Xin Tian
` (16 more replies)
0 siblings, 17 replies; 33+ messages in thread
From: Xin Tian @ 2024-12-18 10:51 UTC (permalink / raw)
To: netdev
Cc: andrew+netdev, kuba, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
The patch series adds the xsc driver, which will support the Yuncilicon
MS/MC/MV series of network devices.
The Yunsilicon MS/MC/MV series network cards provide support for both
Ethernet and RDMA functionalities.
This submission is the first phase, which includes the PF-based Ethernet
transmit and receive functionality. Once this is merged, we will submit
additional patches to implement support for other features, such as SR-IOV,
ethtool support, and a new RDMA driver.
Changes v0->v1:
1. name xsc_core_device as xdev instead of dev
2. modify Signed-off-by tag to Co-developed-by
3. remove some obvious comments
4. remove unnecessary zero-init and NULL-init
5. modify bad-named goto labels
6. reordered variable declarations according to the RCT rule
- Przemek Kitszel comments
7. add MODULE_DESCRIPTION()
- Jeff Johnson comments
8. remove unnecessary dev_info logs
9. replace these magic numbers with #defines in xsc_eth_common.h
10. move code to right place
11. delete unlikely() used in probe
12. remove unnecessary reboot callbacks
- Andrew Lunn comments
Xin Tian (16):
net-next/yunsilicon: Add yunsilicon xsc driver basic framework
net-next/yunsilicon: Enable CMDQ
net-next/yunsilicon: Add hardware setup APIs
net-next/yunsilicon: Add qp and cq management
net-next/yunsilicon: Add eq and alloc
net-next/yunsilicon: Add pci irq
net-next/yunsilicon: Device and interface management
net-next/yunsilicon: Add ethernet interface
net-next/yunsilicon: Init net device
net-next/yunsilicon: Add eth needed qp and cq apis
net-next/yunsilicon: ndo_open and ndo_stop
net-next/yunsilicon: Add ndo_start_xmit
net-next/yunsilicon: Add eth rx
net-next/yunsilicon: add ndo_get_stats64
net-next/yunsilicon: Add ndo_set_mac_address
net-next/yunsilicon: Add change mtu
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/yunsilicon/Kconfig | 26 +
drivers/net/ethernet/yunsilicon/Makefile | 8 +
.../yunsilicon/xsc/common/xsc_auto_hw.h | 94 +
.../ethernet/yunsilicon/xsc/common/xsc_cmd.h | 2513 +++++++++++++++++
.../ethernet/yunsilicon/xsc/common/xsc_cmdq.h | 218 ++
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 647 +++++
.../yunsilicon/xsc/common/xsc_device.h | 77 +
.../yunsilicon/xsc/common/xsc_driver.h | 25 +
.../ethernet/yunsilicon/xsc/common/xsc_pp.h | 38 +
.../net/ethernet/yunsilicon/xsc/net/Kconfig | 16 +
.../net/ethernet/yunsilicon/xsc/net/Makefile | 9 +
.../net/ethernet/yunsilicon/xsc/net/main.c | 2180 ++++++++++++++
.../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 58 +
.../yunsilicon/xsc/net/xsc_eth_common.h | 239 ++
.../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 608 ++++
.../yunsilicon/xsc/net/xsc_eth_stats.c | 42 +
.../yunsilicon/xsc/net/xsc_eth_stats.h | 33 +
.../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 310 ++
.../yunsilicon/xsc/net/xsc_eth_txrx.c | 185 ++
.../yunsilicon/xsc/net/xsc_eth_txrx.h | 90 +
.../ethernet/yunsilicon/xsc/net/xsc_eth_wq.c | 109 +
.../ethernet/yunsilicon/xsc/net/xsc_eth_wq.h | 207 ++
.../net/ethernet/yunsilicon/xsc/net/xsc_pph.h | 176 ++
.../ethernet/yunsilicon/xsc/net/xsc_queue.h | 230 ++
.../net/ethernet/yunsilicon/xsc/pci/Kconfig | 16 +
.../net/ethernet/yunsilicon/xsc/pci/Makefile | 10 +
.../net/ethernet/yunsilicon/xsc/pci/alloc.c | 225 ++
.../net/ethernet/yunsilicon/xsc/pci/alloc.h | 15 +
.../net/ethernet/yunsilicon/xsc/pci/cmdq.c | 2000 +++++++++++++
drivers/net/ethernet/yunsilicon/xsc/pci/cq.c | 151 +
drivers/net/ethernet/yunsilicon/xsc/pci/cq.h | 14 +
drivers/net/ethernet/yunsilicon/xsc/pci/eq.c | 345 +++
drivers/net/ethernet/yunsilicon/xsc/pci/eq.h | 46 +
drivers/net/ethernet/yunsilicon/xsc/pci/hw.c | 269 ++
drivers/net/ethernet/yunsilicon/xsc/pci/hw.h | 18 +
.../net/ethernet/yunsilicon/xsc/pci/intf.c | 279 ++
.../net/ethernet/yunsilicon/xsc/pci/intf.h | 22 +
.../net/ethernet/yunsilicon/xsc/pci/main.c | 426 +++
.../net/ethernet/yunsilicon/xsc/pci/pci_irq.c | 427 +++
.../net/ethernet/yunsilicon/xsc/pci/pci_irq.h | 14 +
drivers/net/ethernet/yunsilicon/xsc/pci/qp.c | 189 ++
drivers/net/ethernet/yunsilicon/xsc/pci/qp.h | 15 +
.../net/ethernet/yunsilicon/xsc/pci/vport.c | 102 +
45 files changed, 12723 insertions(+)
create mode 100644 drivers/net/ethernet/yunsilicon/Kconfig
create mode 100644 drivers/net/ethernet/yunsilicon/Makefile
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Makefile
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/main.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/intf.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/intf.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/main.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.h
create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/vport.c
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework
2024-12-18 10:50 ` [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework Xin Tian
@ 2024-12-18 13:58 ` Przemek Kitszel
2024-12-23 3:43 ` tianx
2024-12-18 18:20 ` Andrew Lunn
1 sibling, 1 reply; 33+ messages in thread
From: Przemek Kitszel @ 2024-12-18 13:58 UTC (permalink / raw)
To: Xin Tian
Cc: andrew+netdev, kuba, netdev, pabeni, edumazet, davem,
jeff.johnson, weihg, wanry
On 12/18/24 11:50, Xin Tian wrote:
> Add yunsilicon xsc driver basic framework, including xsc_pci driver
> and xsc_eth driver
>
>
> Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
> Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
Co-devs need to sign-off too, scripts/checkpatch.pl would catch that
(and more)
> Signed-off-by: Xin Tian <tianx@yunsilicon.com>
> ---
> drivers/net/ethernet/Kconfig | 1 +
> drivers/net/ethernet/Makefile | 1 +
> drivers/net/ethernet/yunsilicon/Kconfig | 26 ++
> drivers/net/ethernet/yunsilicon/Makefile | 8 +
> .../ethernet/yunsilicon/xsc/common/xsc_core.h | 132 +++++++++
> .../net/ethernet/yunsilicon/xsc/net/Kconfig | 16 ++
> .../net/ethernet/yunsilicon/xsc/net/Makefile | 9 +
> .../net/ethernet/yunsilicon/xsc/pci/Kconfig | 16 ++
> .../net/ethernet/yunsilicon/xsc/pci/Makefile | 9 +
> .../net/ethernet/yunsilicon/xsc/pci/main.c | 272 ++++++++++++++++++
> 10 files changed, 490 insertions(+)
> create mode 100644 drivers/net/ethernet/yunsilicon/Kconfig
> create mode 100644 drivers/net/ethernet/yunsilicon/Makefile
> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Makefile
> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/main.c
>
> diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
> index 0baac25db..aa6016597 100644
> --- a/drivers/net/ethernet/Kconfig
> +++ b/drivers/net/ethernet/Kconfig
> @@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig"
> source "drivers/net/ethernet/ibm/Kconfig"
> source "drivers/net/ethernet/intel/Kconfig"
> source "drivers/net/ethernet/xscale/Kconfig"
> +source "drivers/net/ethernet/yunsilicon/Kconfig"
>
> config JME
> tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
> diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
> index c03203439..c16c34d4b 100644
> --- a/drivers/net/ethernet/Makefile
> +++ b/drivers/net/ethernet/Makefile
> @@ -51,6 +51,7 @@ obj-$(CONFIG_NET_VENDOR_INTEL) += intel/
> obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/
> obj-$(CONFIG_NET_VENDOR_MICROSOFT) += microsoft/
> obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/
> +obj-$(CONFIG_NET_VENDOR_YUNSILICON) += yunsilicon/
> obj-$(CONFIG_JME) += jme.o
> obj-$(CONFIG_KORINA) += korina.o
> obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o
> diff --git a/drivers/net/ethernet/yunsilicon/Kconfig b/drivers/net/ethernet/yunsilicon/Kconfig
> new file mode 100644
> index 000000000..c766390b4
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/Kconfig
> @@ -0,0 +1,26 @@
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> +# All rights reserved.
> +# Yunsilicon driver configuration
> +#
> +
> +config NET_VENDOR_YUNSILICON
> + bool "Yunsilicon devices"
> + default y
> + depends on PCI || NET
Would it work for you to have only one of the above enabled?
I didn't noticed your response to the same question on your v0
(BTW, versioning starts at v0, remember to add also links to previous
versions (not needed for your v0, to don't bother you with 16 URLs :))
> + depends on ARM64 || X86_64
> + help
> + If you have a network (Ethernet) device belonging to this class,
> + say Y.
> +
> + Note that the answer to this question doesn't directly affect the
> + kernel: saying N will just cause the configurator to skip all
> + the questions about Yunsilicon cards. If you say Y, you will be asked
> + for your specific card in the following questions.
> +
> +if NET_VENDOR_YUNSILICON
> +
> +source "drivers/net/ethernet/yunsilicon/xsc/net/Kconfig"
> +source "drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig"
> +
> +endif # NET_VENDOR_YUNSILICON
> diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile
> new file mode 100644
> index 000000000..6fc8259a7
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/Makefile
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> +# All rights reserved.
> +# Makefile for the Yunsilicon device drivers.
> +#
> +
> +# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
> +obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
> \ No newline at end of file
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
> new file mode 100644
> index 000000000..5ed12760e
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
> @@ -0,0 +1,132 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> + * All rights reserved.
> + */
> +
> +#ifndef XSC_CORE_H
> +#define XSC_CORE_H
typically there are two underscores in the header names
> +
> +#include <linux/kernel.h>
> +#include <linux/pci.h>
> +
> +extern unsigned int xsc_log_level;
> +
> +#define XSC_PCI_VENDOR_ID 0x1f67
> +
> +#define XSC_MC_PF_DEV_ID 0x1011
> +#define XSC_MC_VF_DEV_ID 0x1012
> +#define XSC_MC_PF_DEV_ID_DIAMOND 0x1021
> +
> +#define XSC_MF_HOST_PF_DEV_ID 0x1051
> +#define XSC_MF_HOST_VF_DEV_ID 0x1052
> +#define XSC_MF_SOC_PF_DEV_ID 0x1053
> +
> +#define XSC_MS_PF_DEV_ID 0x1111
> +#define XSC_MS_VF_DEV_ID 0x1112
> +
> +#define XSC_MV_HOST_PF_DEV_ID 0x1151
> +#define XSC_MV_HOST_VF_DEV_ID 0x1152
> +#define XSC_MV_SOC_PF_DEV_ID 0x1153
> +
> +enum {
> + XSC_LOG_LEVEL_DBG = 0,
> + XSC_LOG_LEVEL_INFO = 1,
> + XSC_LOG_LEVEL_WARN = 2,
> + XSC_LOG_LEVEL_ERR = 3,
> +};
> +
> +#define xsc_dev_log(condition, level, dev, fmt, ...) \
> +do { \
> + if (condition) \
> + dev_printk(level, dev, dev_fmt(fmt), ##__VA_ARGS__); \
> +} while (0)
> +
> +#define xsc_core_dbg(__dev, format, ...) \
> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_DBG, KERN_DEBUG, \
> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
> +
> +#define xsc_core_dbg_once(__dev, format, ...) \
> + dev_dbg_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, \
> + ##__VA_ARGS__)
> +
> +#define xsc_core_dbg_mask(__dev, mask, format, ...) \
> +do { \
> + if ((mask) & xsc_debug_mask) \
> + xsc_core_dbg(__dev, format, ##__VA_ARGS__); \
> +} while (0)
> +
> +#define xsc_core_err(__dev, format, ...) \
> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_ERR, KERN_ERR, \
> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
> +
> +#define xsc_core_err_rl(__dev, format, ...) \
> + dev_err_ratelimited(&(__dev)->pdev->dev, \
> + "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, \
> + ##__VA_ARGS__)
> +
> +#define xsc_core_warn(__dev, format, ...) \
> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_WARN, KERN_WARNING, \
> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
> +
> +#define xsc_core_info(__dev, format, ...) \
> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_INFO, KERN_INFO, \
> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
> +
> +#define xsc_pr_debug(format, ...) \
> +do { \
> + if (xsc_log_level <= XSC_LOG_LEVEL_DBG) \
> + pr_debug(format, ##__VA_ARGS__); \
> +} while (0)
> +
> +#define assert(__dev, expr) \
> +do { \
> + if (!(expr)) { \
> + dev_err(&(__dev)->pdev->dev, \
> + "Assertion failed! %s, %s, %s, line %d\n", \
> + #expr, __FILE__, __func__, __LINE__); \
> + } \
> +} while (0)
as a rule of thumb, don't add functions/macros that you don't use in
given patch
have you seen WARN_ON() family?
> +
> +enum {
> + XSC_MAX_NAME_LEN = 32,
> +};
> +
> +struct xsc_dev_resource {
> + struct mutex alloc_mutex; /* protect buffer alocation according to numa node */
> +};
> +
> +enum xsc_pci_state {
> + XSC_PCI_STATE_DISABLED,
> + XSC_PCI_STATE_ENABLED,
> +};
> +
> +struct xsc_priv {
> + char name[XSC_MAX_NAME_LEN];
> + struct list_head dev_list;
> + struct list_head ctx_list;
> + spinlock_t ctx_lock; /* protect ctx_list */
> + int numa_node;
> +};
> +
> +struct xsc_core_device {
> + struct pci_dev *pdev;
> + struct device *device;
> + struct xsc_priv priv;
> + struct xsc_dev_resource *dev_res;
> +
> + void __iomem *bar;
> + int bar_num;
> +
> + struct mutex pci_state_mutex; /* protect pci_state */
> + enum xsc_pci_state pci_state;
> + struct mutex intf_state_mutex; /* protect intf_state */
> + unsigned long intf_state;
> +};
> +
> +#endif
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
> new file mode 100644
> index 000000000..0d9a4ff8a
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> +# All rights reserved.
> +# Yunsilicon driver configuration
> +#
> +
> +config YUNSILICON_XSC_ETH
> + tristate "Yunsilicon XSC ethernet driver"
> + default n
> + depends on YUNSILICON_XSC_PCI
> + help
> + This driver provides ethernet support for
> + Yunsilicon XSC devices.
> +
> + To compile this driver as a module, choose M here. The module
> + will be called xsc_eth.
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
> new file mode 100644
> index 000000000..2811433af
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
> @@ -0,0 +1,9 @@
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> +# All rights reserved.
> +
> +ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
> +
> +obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
> +
> +xsc_eth-y := main.o
> \ No newline at end of file
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
> new file mode 100644
> index 000000000..2b6d79905
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> +# All rights reserved.
> +# Yunsilicon PCI configuration
> +#
> +
> +config YUNSILICON_XSC_PCI
> + tristate "Yunsilicon XSC PCI driver"
> + default n
> + select PAGE_POOL
> + help
> + This driver is common for Yunsilicon XSC
> + ethernet and RDMA drivers.
> +
> + To compile this driver as a module, choose M here. The module
> + will be called xsc_pci.
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
> new file mode 100644
> index 000000000..709270df8
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
> @@ -0,0 +1,9 @@
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> +# All rights reserved.
> +
> +ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
> +
> +obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
> +
> +xsc_pci-y := main.o
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
> new file mode 100644
> index 000000000..cbe0bfbd1
> --- /dev/null
> +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
> @@ -0,0 +1,272 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> + * All rights reserved.
> + */
> +
> +#include "common/xsc_core.h"
> +
> +unsigned int xsc_log_level = XSC_LOG_LEVEL_WARN;
> +module_param_named(log_level, xsc_log_level, uint, 0644);
> +MODULE_PARM_DESC(log_level,
> + "lowest log level to print: 0=debug, 1=info, 2=warning, 3=error. Default=1");
> +EXPORT_SYMBOL(xsc_log_level);
> +
> +#define XSC_PCI_DRV_DESC "Yunsilicon Xsc PCI driver"
remove the define and just use the string inplace as desription
> +
> +static const struct pci_device_id xsc_pci_id_table[] = {
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID_DIAMOND) },
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_HOST_PF_DEV_ID) },
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_SOC_PF_DEV_ID) },
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MS_PF_DEV_ID) },
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_HOST_PF_DEV_ID) },
> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_SOC_PF_DEV_ID) },
> + { 0 }
> +};
> +
> +static int set_dma_caps(struct pci_dev *pdev)
> +{
> + int err;
> +
> + err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
> + if (err)
> + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> + else
> + err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
> +
> + if (!err)
> + dma_set_max_seg_size(&pdev->dev, SZ_2G);
> +
> + return err;
> +}
> +
> +static int xsc_pci_enable_device(struct xsc_core_device *xdev)
> +{
> + struct pci_dev *pdev = xdev->pdev;
> + int err = 0;
> +
> + mutex_lock(&xdev->pci_state_mutex);
> + if (xdev->pci_state == XSC_PCI_STATE_DISABLED) {
> + err = pci_enable_device(pdev);
> + if (!err)
> + xdev->pci_state = XSC_PCI_STATE_ENABLED;
> + }
> + mutex_unlock(&xdev->pci_state_mutex);
> +
> + return err;
> +}
> +
> +static void xsc_pci_disable_device(struct xsc_core_device *xdev)
> +{
> + struct pci_dev *pdev = xdev->pdev;
> +
> + mutex_lock(&xdev->pci_state_mutex);
> + if (xdev->pci_state == XSC_PCI_STATE_ENABLED) {
> + pci_disable_device(pdev);
> + xdev->pci_state = XSC_PCI_STATE_DISABLED;
> + }
> + mutex_unlock(&xdev->pci_state_mutex);
> +}
> +
> +static int xsc_pci_init(struct xsc_core_device *xdev, const struct pci_device_id *id)
> +{
> + struct pci_dev *pdev = xdev->pdev;
> + void __iomem *bar_base;
> + int bar_num = 0;
> + int err;
> +
> + mutex_init(&xdev->pci_state_mutex);
> + xdev->priv.numa_node = dev_to_node(&pdev->dev);
> +
> + err = xsc_pci_enable_device(xdev);
> + if (err) {
> + xsc_core_err(xdev, "failed to enable PCI device: err=%d\n", err);
> + goto err_ret;
> + }
> +
> + err = pci_request_region(pdev, bar_num, KBUILD_MODNAME);
> + if (err) {
> + xsc_core_err(xdev, "failed to request %s pci_region=%d: err=%d\n",
> + KBUILD_MODNAME, bar_num, err);
> + goto err_disable;
> + }
> +
> + pci_set_master(pdev);
> +
> + err = set_dma_caps(pdev);
> + if (err) {
> + xsc_core_err(xdev, "failed to set DMA capabilities mask: err=%d\n", err);
> + goto err_clr_master;
> + }
> +
> + bar_base = pci_ioremap_bar(pdev, bar_num);
> + if (!bar_base) {
> + xsc_core_err(xdev, "failed to ioremap %s bar%d\n", KBUILD_MODNAME, bar_num);
> + goto err_clr_master;
> + }
> +
> + err = pci_save_state(pdev);
> + if (err) {
> + xsc_core_err(xdev, "pci_save_state failed: err=%d\n", err);
> + goto err_io_unmap;
> + }
> +
> + xdev->bar_num = bar_num;
> + xdev->bar = bar_base;
> +
> + return 0;
> +
> +err_io_unmap:
> + pci_iounmap(pdev, bar_base);
> +err_clr_master:
> + pci_clear_master(pdev);
> + pci_release_region(pdev, bar_num);
> +err_disable:
> + xsc_pci_disable_device(xdev);
> +err_ret:
> + return err;
> +}
> +
> +static void xsc_pci_fini(struct xsc_core_device *xdev)
> +{
> + struct pci_dev *pdev = xdev->pdev;
> +
> + if (xdev->bar)
> + pci_iounmap(pdev, xdev->bar);
> + pci_clear_master(pdev);
> + pci_release_region(pdev, xdev->bar_num);
> + xsc_pci_disable_device(xdev);
> +}
> +
> +static int xsc_priv_init(struct xsc_core_device *xdev)
> +{
> + struct xsc_priv *priv = &xdev->priv;
> +
> + strscpy(priv->name, dev_name(&xdev->pdev->dev), XSC_MAX_NAME_LEN);
> +
> + INIT_LIST_HEAD(&priv->ctx_list);
> + spin_lock_init(&priv->ctx_lock);
> + mutex_init(&xdev->intf_state_mutex);
> +
> + return 0;
> +}
> +
> +static int xsc_dev_res_init(struct xsc_core_device *xdev)
> +{
> + struct xsc_dev_resource *dev_res;
> +
> + dev_res = kvzalloc(sizeof(*dev_res), GFP_KERNEL);
> + if (!dev_res)
> + return -ENOMEM;
> +
> + xdev->dev_res = dev_res;
> + mutex_init(&dev_res->alloc_mutex);
> +
> + return 0;
> +}
> +
> +static void xsc_dev_res_cleanup(struct xsc_core_device *xdev)
> +{
> + kfree(xdev->dev_res);
> +}
> +
> +static int xsc_core_dev_init(struct xsc_core_device *xdev)
> +{
> + int err;
> +
> + xsc_priv_init(xdev);
> +
> + err = xsc_dev_res_init(xdev);
> + if (err) {
> + xsc_core_err(xdev, "xsc dev res init failed %d\n", err);
> + goto out;
return err...
> + }
> +
> + return 0;
> +out:
> + return err;
...so you could remove last two lines
> +}
> +
> +static void xsc_core_dev_cleanup(struct xsc_core_device *xdev)
> +{
> + xsc_dev_res_cleanup(xdev);
> +}
> +
> +static int xsc_pci_probe(struct pci_dev *pci_dev,
> + const struct pci_device_id *id)
> +{
> + struct xsc_core_device *xdev;
> + int err;
> +
> + xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
> + if (!xdev)
> + return -ENOMEM;
> +
> + xdev->pdev = pci_dev;
> + xdev->device = &pci_dev->dev;
> +
> + pci_set_drvdata(pci_dev, xdev);
> + err = xsc_pci_init(xdev, id);
> + if (err) {
> + xsc_core_err(xdev, "xsc_pci_init failed %d\n", err);
> + goto err_unset_pci_drvdata;
> + }
> +
> + err = xsc_core_dev_init(xdev);
> + if (err) {
> + xsc_core_err(xdev, "xsc_core_dev_init failed %d\n", err);
> + goto err_pci_fini;
> + }
> +
> + return 0;
> +err_pci_fini:
> + xsc_pci_fini(xdev);
> +err_unset_pci_drvdata:
> + pci_set_drvdata(pci_dev, NULL);
> + kfree(xdev);
> +
> + return err;
> +}
> +
> +static void xsc_pci_remove(struct pci_dev *pci_dev)
> +{
> + struct xsc_core_device *xdev = pci_get_drvdata(pci_dev);
> +
> + xsc_core_dev_cleanup(xdev);
> + xsc_pci_fini(xdev);
> + pci_set_drvdata(pci_dev, NULL);
> + kfree(xdev);
> +}
> +
> +static struct pci_driver xsc_pci_driver = {
> + .name = "xsc-pci",
> + .id_table = xsc_pci_id_table,
> + .probe = xsc_pci_probe,
> + .remove = xsc_pci_remove,
> +};
> +
> +static int __init xsc_init(void)
> +{
> + int err;
> +
> + err = pci_register_driver(&xsc_pci_driver);
> + if (err) {
> + pr_err("failed to register pci driver\n");
> + goto out;
ditto plain return
> + }
> + return 0;
> +
> +out:
> + return err;
> +}
> +
> +static void __exit xsc_fini(void)
> +{
> + pci_unregister_driver(&xsc_pci_driver);
> +}
> +
> +module_init(xsc_init);
> +module_exit(xsc_fini);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_DESCRIPTION(XSC_PCI_DRV_DESC);
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ
2024-12-18 10:50 ` [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ Xin Tian
@ 2024-12-18 14:46 ` Przemek Kitszel
2024-12-27 3:52 ` tianx
0 siblings, 1 reply; 33+ messages in thread
From: Przemek Kitszel @ 2024-12-18 14:46 UTC (permalink / raw)
To: Xin Tian
Cc: andrew+netdev, kuba, netdev, pabeni, edumazet, davem,
jeff.johnson, weihg, wanry
On 12/18/24 11:50, Xin Tian wrote:
> Enable cmd queue to support driver-firmware communication.
> Hardware control will be performed through cmdq mostly.
>
>
> Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
> Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
> Signed-off-by: Xin Tian <tianx@yunsilicon.com>
> +
> +#ifndef XSC_CMD_H
> +#define XSC_CMD_H
> +
> +#define CMDQ_VERSION 0x32
> +
> +#define MAX_MBOX_OUT_LEN 2048
> +
> +#define QOS_PRIO_MAX 7
> +#define QOS_DSCP_MAX 63
weird formatting
> +#define MAC_PORT_DSCP_SHIFT 6
> +#define QOS_PCP_MAX 7
> +#define DSCP_PCP_UNSET 255
> +#define MAC_PORT_PCP_SHIFT 3
> +#define XSC_MAX_MAC_NUM 8
> +#define XSC_BOARD_SN_LEN 32
> +#define MAX_PKT_LEN 9800
you don't want to collide with future defines in core kernel
> +#define XSC_RTT_CFG_QPN_MAX 32
> +
> +#define XSC_PCIE_LAT_CFG_INTERVAL_MAX 8
> +#define XSC_PCIE_LAT_CFG_HISTOGRAM_MAX 9
> +#define XSC_PCIE_LAT_EN_DISABLE 0
> +#define XSC_PCIE_LAT_EN_ENABLE 1
> +#define XSC_PCIE_LAT_PERIOD_MIN 1
> +#define XSC_PCIE_LAT_PERIOD_MAX 20
> +#define DPU_PORT_WGHT_CFG_MAX 1
> +
> +enum {
> + XSC_CMD_STAT_OK = 0x0,
> + XSC_CMD_STAT_INT_ERR = 0x1,
> + XSC_CMD_STAT_BAD_OP_ERR = 0x2,
> + XSC_CMD_STAT_BAD_PARAM_ERR = 0x3,
> + XSC_CMD_STAT_BAD_SYS_STATE_ERR = 0x4,
> + XSC_CMD_STAT_BAD_RES_ERR = 0x5,
> + XSC_CMD_STAT_RES_BUSY = 0x6,
> + XSC_CMD_STAT_LIM_ERR = 0x8,
> + XSC_CMD_STAT_BAD_RES_STATE_ERR = 0x9,
> + XSC_CMD_STAT_IX_ERR = 0xa,
> + XSC_CMD_STAT_NO_RES_ERR = 0xf,
> + XSC_CMD_STAT_BAD_INP_LEN_ERR = 0x50,
> + XSC_CMD_STAT_BAD_OUTP_LEN_ERR = 0x51,
> + XSC_CMD_STAT_BAD_QP_STATE_ERR = 0x10,
I would keep the list numerically sorted
> + XSC_CMD_STAT_BAD_PKT_ERR = 0x30,
> + XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR = 0x40,
> +};
> +
> +enum {
> + DPU_PORT_WGHT_TARGET_HOST,
> + DPU_PORT_WGHT_TARGET_SOC,
> + DPU_PORT_WGHT_TARGET_NUM,
> +};
> +
> +enum {
> + DPU_PRIO_WGHT_TARGET_HOST2SOC,
> + DPU_PRIO_WGHT_TARGET_SOC2HOST,
> + DPU_PRIO_WGHT_TARGET_HOSTSOC2LAG,
> + DPU_PRIO_WGHT_TARGET_NUM,
> +};
> +
> +#define XSC_AP_FEAT_UDP_SPORT_MIN 1024
> +#define XSC_AP_FEAT_UDP_SPORT_MAX 65535
there is really nothing to reuse from code networking?
> +
> +enum {
> + XSC_CMD_OP_QUERY_HCA_CAP = 0x100,
> + XSC_CMD_OP_QUERY_ADAPTER = 0x101,
> + XSC_CMD_OP_INIT_HCA = 0x102,
> + XSC_CMD_OP_TEARDOWN_HCA = 0x103,
> + XSC_CMD_OP_ENABLE_HCA = 0x104,
> + XSC_CMD_OP_DISABLE_HCA = 0x105,
> + XSC_CMD_OP_MODIFY_HCA = 0x106,
> + XSC_CMD_OP_QUERY_PAGES = 0x107,
> + XSC_CMD_OP_MANAGE_PAGES = 0x108,
> + XSC_CMD_OP_SET_HCA_CAP = 0x109,
> + XSC_CMD_OP_QUERY_CMDQ_VERSION = 0x10a,
> + XSC_CMD_OP_QUERY_MSIX_TBL_INFO = 0x10b,
> + XSC_CMD_OP_FUNCTION_RESET = 0x10c,
> + XSC_CMD_OP_DUMMY = 0x10d,
I didn't checked, but please limit the scope of added defines
to cover only what this series uses (IOW: no dead code)
[...]
> +
> +enum xsc_eth_qp_num_sel {
> + XSC_ETH_QP_NUM_8K_SEL = 0,
> + XSC_ETH_QP_NUM_8K_8TC_SEL,
> + XSC_ETH_QP_NUM_SEL_MAX,
no comma after items that you don't expect addition after
> +};
> +
> +enum xsc_eth_vf_num_sel {
> + XSC_ETH_VF_NUM_SEL_8 = 0,
> + XSC_ETH_VF_NUM_SEL_16,
> + XSC_ETH_VF_NUM_SEL_32,
> + XSC_ETH_VF_NUM_SEL_64,
> + XSC_ETH_VF_NUM_SEL_128,
> + XSC_ETH_VF_NUM_SEL_256,
> + XSC_ETH_VF_NUM_SEL_512,
> + XSC_ETH_VF_NUM_SEL_1024,
> + XSC_ETH_VF_NUM_SEL_MAX
> +};
> +
> +enum {
> + LINKSPEED_MODE_UNKNOWN = -1,
> + LINKSPEED_MODE_10G = 10000,
> + LINKSPEED_MODE_25G = 25000,
> + LINKSPEED_MODE_40G = 40000,
> + LINKSPEED_MODE_50G = 50000,
> + LINKSPEED_MODE_100G = 100000,
> + LINKSPEED_MODE_200G = 200000,
> + LINKSPEED_MODE_400G = 400000,
reminder about prefixing your enums by XSC_
would be also good to add your max supported speed into cover letter
(to get some more atteniton :))
> +};
> +
> +enum {
> + MODULE_SPEED_UNKNOWN,
> + MODULE_SPEED_10G,
> + MODULE_SPEED_25G,
> + MODULE_SPEED_40G_R4,
> + MODULE_SPEED_50G_R,
> + MODULE_SPEED_50G_R2,
> + MODULE_SPEED_100G_R2,
> + MODULE_SPEED_100G_R4,
> + MODULE_SPEED_200G_R4,
> + MODULE_SPEED_200G_R8,
> + MODULE_SPEED_400G_R8,
> +};
> +
> +enum xsc_dma_direct {
> + DMA_DIR_TO_MAC,
> + DMA_DIR_READ,
> + DMA_DIR_WRITE,
> + DMA_DIR_LOOPBACK,
> + DMA_DIR_MAX,
> +};
> +
> +/* hw feature bitmap, 32bit */
> +enum xsc_hw_feature_flag {
> + XSC_HW_RDMA_SUPPORT = 0x1,
= BIT(0)
> + XSC_HW_PFC_PRIO_STATISTIC_SUPPORT = 0x2,
= BIT(1), etc
> + XSC_HW_THIRD_FEATURE = 0x4,
> + XSC_HW_PFC_STALL_STATS_SUPPORT = 0x8,
> + XSC_HW_RDMA_CM_SUPPORT = 0x20,
> +
> + XSC_HW_LAST_FEATURE = 0x80000000,
> +};
> +
> +enum xsc_lldp_dcbx_sub_cmd {
> + XSC_OS_HANDLE_LLDP_STATUS = 0x1,
> + XSC_DCBX_STATUS
> +};
> +
> +struct xsc_inbox_hdr {
> + __be16 opcode;
> + u8 rsvd[4];
> + __be16 ver;
this is command version? (vs API as a whole, or HW ver, etc)
> +};
> +
> +struct xsc_outbox_hdr {
> + u8 status;
> + u8 rsvd[5];
> + __be16 ver;
> +};
> +
> +struct xsc_alloc_ia_lock_mbox_in {
> + struct xsc_inbox_hdr hdr;
> + u8 lock_num;
> + u8 rsvd[7];
> +};
> +
> +#define XSC_RES_NUM_IAE_GRP 16
> +
> +struct xsc_alloc_ia_lock_mbox_out {
> + struct xsc_outbox_hdr hdr;
> + u8 lock_idx[XSC_RES_NUM_IAE_GRP];
> +};
> +
> +struct xsc_release_ia_lock_mbox_in {
> + struct xsc_inbox_hdr hdr;
> + u8 lock_idx[XSC_RES_NUM_IAE_GRP];
> +};
> +
> +struct xsc_release_ia_lock_mbox_out {
> + struct xsc_outbox_hdr hdr;
> + u8 rsvd[8];
> +};
> +
> +struct xsc_pci_driver_init_params_in {
> + struct xsc_inbox_hdr hdr;
> + __be32 s_wqe_mode;
> + __be32 r_wqe_mode;
> + __be32 local_timeout_retrans;
> + u8 mac_lossless_prio[XSC_MAX_MAC_NUM];
please look up your formatting (through the series)
> + __be32 group_mod;
> +};
> +
> +struct xsc_pci_driver_init_params_out {
> + struct xsc_outbox_hdr hdr;
> + u8 rsvd[8];
> +};
> +
> +/*CQ mbox*/
> +struct xsc_cq_context {
> + __be16 eqn;
> + __be16 pa_num;
> + __be16 glb_func_id;
> + u8 log_cq_sz;
> + u8 cq_type;
> +};
> +
> +struct xsc_create_cq_mbox_in {
> + struct xsc_inbox_hdr hdr;
> + struct xsc_cq_context ctx;
> + __be64 pas[];
would be great to explain your shortcuts: eqn, pas, and all other non
obvious ones
[...]
> +/*PD mbox*/
> +struct xsc_alloc_pd_request {
> + u8 rsvd[8];
> +};
really? what this struct is for?
> +
> +struct xsc_alloc_pd_mbox_in {
> + struct xsc_inbox_hdr hdr;
> + struct xsc_alloc_pd_request req;
this is the only use of the above
[...]
> +/* vport mbox */
> +struct xsc_nic_vport_context {
> + __be32 min_wqe_inline_mode:3;
> + __be32 disable_mc_local_lb:1;
> + __be32 disable_uc_local_lb:1;
> + __be32 roce_en:1;
> +
> + __be32 arm_change_event:1;
> + __be32 event_on_mtu:1;
> + __be32 event_on_promisc_change:1;
> + __be32 event_on_vlan_change:1;
> + __be32 event_on_mc_address_change:1;
> + __be32 event_on_uc_address_change:1;
> + __be32 affiliation_criteria:4;
I guess you will have hard time reading those bitfields out
look at FIELD_GET() and similar
> + __be32 affiliated_vhca_id;
> +
> + __be16 mtu;
> +
> + __be64 system_image_guid;
> + __be64 port_guid;
> + __be64 node_guid;
> +
> + __be32 qkey_violation_counter;
> +
> + __be16 spoofchk:1;
> + __be16 trust:1;
> + __be16 promisc:1;
> + __be16 allmcast:1;
> + __be16 vlan_allowed:1;
> + __be16 allowed_list_type:3;
> + __be16 allowed_list_size:10;
> +
> + __be16 vlan_proto;
> + __be16 vlan;
> + u8 qos;
> + u8 permanent_address[6];
> + u8 current_address[6];
> + u8 current_uc_mac_address[0][2];
> +};
> +
> +enum {
> + XSC_HCA_VPORT_SEL_PORT_GUID = 1 << 0,
> + XSC_HCA_VPORT_SEL_NODE_GUID = 1 << 1,
> + XSC_HCA_VPORT_SEL_STATE_POLICY = 1 << 2,
BIT(0), BIT(1), BIT(2)
[...]
> +
> +struct xsc_array128 {
> + u8 array128[16];
> +};
both struct name and member name are wrong, this is an
u128 type basically
> +
> +struct xsc_query_hca_vport_gid_out {
> + struct xsc_outbox_hdr hdr;
> + u16 gids_num;
> + struct xsc_array128 gid[];
__counted_by(gids_num)
> +};
> +
> +struct xsc_query_hca_vport_gid_in {
> + struct xsc_inbox_hdr hdr;
> + u32 other_vport:1;
most other things you have in Big Endian, and now CPU endiannes,
it's intentional?
> + u32 port_num:4;
> + u32 vport_number:16;
> + u32 rsvd0:11;
> + u16 gid_index;
> +};
> +
> +struct xsc_pkey {
> + u16 pkey;
> +};
> +
> +struct xsc_query_hca_vport_pkey_out {
> + struct xsc_outbox_hdr hdr;
> + struct xsc_pkey pkey[];
> +};
> +
> +struct xsc_query_hca_vport_pkey_in {
> + struct xsc_inbox_hdr hdr;
> + u32 other_vport:1;
> + u32 port_num:4;
> + u32 vport_number:16;
> + u32 rsvd0:11;
> + u16 pkey_index;
> +};
> +
> +struct xsc_query_vport_state_out {
> + struct xsc_outbox_hdr hdr;
> + u8 admin_state:4;
> + u8 state:4;
> +};
> +
> +struct xsc_query_vport_state_in {
> + struct xsc_inbox_hdr hdr;
> + u32 other_vport:1;
> + u32 vport_number:16;
> + u32 rsvd0:15;
> +};
> +
> +struct xsc_modify_vport_state_out {
> + struct xsc_outbox_hdr hdr;
> +};
> +
> +struct xsc_modify_vport_state_in {
> + struct xsc_inbox_hdr hdr;
> + u32 other_vport:1;
> + u32 vport_number:16;
> + u32 rsvd0:15;
> + u8 admin_state:4;
> + u8 rsvd1:4;
> +};
> +
> +struct xsc_traffic_counter {
> + u64 packets;
> + u64 bytes;
> +};
> +
> +struct xsc_query_vport_counter_out {
> + struct xsc_outbox_hdr hdr;
> + struct xsc_traffic_counter received_errors;
> + struct xsc_traffic_counter transmit_errors;
> + struct xsc_traffic_counter received_ib_unicast;
> + struct xsc_traffic_counter transmitted_ib_unicast;
> + struct xsc_traffic_counter received_ib_multicast;
> + struct xsc_traffic_counter transmitted_ib_multicast;
> + struct xsc_traffic_counter received_eth_broadcast;
> + struct xsc_traffic_counter transmitted_eth_broadcast;
> + struct xsc_traffic_counter received_eth_unicast;
> + struct xsc_traffic_counter transmitted_eth_unicast;
> + struct xsc_traffic_counter received_eth_multicast;
> + struct xsc_traffic_counter transmitted_eth_multicast;
> +};
> +
not related to "getting counters from the vport", but please make
sure to be familiar with struct rtnl_link_stats64 for actual reporting
needs
> +#define ETH_ALEN 6
please remove
[...]
> +
> +struct xsc_event_linkstatus_resp {
> + u8 linkstatus; /*0:down, 1:up*/
> +};
> +
> +struct xsc_event_linkinfo {
> + u8 linkstatus; /*0:down, 1:up*/
> + u8 port;
> + u8 duplex;
> + u8 autoneg;
> + u32 linkspeed;
> + u64 supported;
> + u64 advertising;
> + u64 supported_fec; /* reserved, not support currently */
> + u64 advertised_fec; /* reserved, not support currently */
> + u64 supported_speed[2];
> + u64 advertising_speed[2];
> +};
heads up: please make our link speed, pause, fec code commits
clearly marked, so reviewers interested in that topic would recognize
and in general: please elaborate a bit in the commit message what it
does in terms of standard networking/kernel (not limiting to your
3-4 letter acronyms used for name of channel between your driver and HW)
this patch is so long that I will end here
[...]
> +
> +struct hwc_set_t {
add prefix xsc_ in the name
don't name structs _t, as that would be confused for typedefed one
[...]
> +
> + u8 dword_0[0x20];
> + u8 dword_1[0x20];
> + u8 dword_2[0x20];
> + u8 dword_3[0x20];
> + u8 dword_4[0x20];
> + u8 dword_5[0x20];
> + u8 dword_6[0x20];
> + u8 dword_7[0x20];
> + u8 dword_8[0x20];
> + u8 dword_9[0x20];
> + u8 dword_10[0x20];
> + u8 dword_11[0x20];
sizeof(dword) is neither sizeof(u8) nor 0x20
(bad name)
> +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h
> @@ -0,0 +1,218 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> + * All rights reserved.
> + */
> +
> +#ifndef XSC_CMDQ_H
> +#define XSC_CMDQ_H
> +
> +#include "common/xsc_cmd.h"
> +
> +enum {
> + /* one minute for the sake of bringup. Generally, commands must always
outdated comment
> + * complete and we may need to increase this timeout value
> + */
> + XSC_CMD_TIMEOUT_MSEC = 10 * 1000,
> + XSC_CMD_WQ_MAX_NAME = 32,
take a look at the abundant kernel provided work queues, not need to
spawn your own most of the time
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework
2024-12-18 10:50 ` [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework Xin Tian
2024-12-18 13:58 ` Przemek Kitszel
@ 2024-12-18 18:20 ` Andrew Lunn
2024-12-23 3:20 ` tianx
1 sibling, 1 reply; 33+ messages in thread
From: Andrew Lunn @ 2024-12-18 18:20 UTC (permalink / raw)
To: Xin Tian
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
> +enum {
> + XSC_LOG_LEVEL_DBG = 0,
> + XSC_LOG_LEVEL_INFO = 1,
> + XSC_LOG_LEVEL_WARN = 2,
> + XSC_LOG_LEVEL_ERR = 3,
> +};
> +
> +#define xsc_dev_log(condition, level, dev, fmt, ...) \
> +do { \
> + if (condition) \
> + dev_printk(level, dev, dev_fmt(fmt), ##__VA_ARGS__); \
> +} while (0)
> +
> +#define xsc_core_dbg(__dev, format, ...) \
> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_DBG, KERN_DEBUG, \
> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
> +
> +#define xsc_core_dbg_once(__dev, format, ...) \
> + dev_dbg_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> + __func__, __LINE__, current->pid, \
> + ##__VA_ARGS__)
> +
> +#define xsc_core_dbg_mask(__dev, mask, format, ...) \
> +do { \
> + if ((mask) & xsc_debug_mask) \
> + xsc_core_dbg(__dev, format, ##__VA_ARGS__); \
> +} while (0)
You where asked to throw all these away and just use the existing
methods.
If you disagree with a comment, please reply and ask for more details,
understand the reason behind the comment, or maybe try to justify your
solution over what already exists.
Maybe look at the ethtool .get_msglevel & .set_msglevel if you are not
already using them.
> +unsigned int xsc_log_level = XSC_LOG_LEVEL_WARN;
> +module_param_named(log_level, xsc_log_level, uint, 0644);
> +MODULE_PARM_DESC(log_level,
> + "lowest log level to print: 0=debug, 1=info, 2=warning, 3=error. Default=1");
Module parameters are not liked. You will however find quite a few
drivers with something like:
MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
which is used to set the initial msglevel. That will probably be
accepted.
> +EXPORT_SYMBOL(xsc_log_level);
I've not looked at your overall structure yet, but why export this?
Are there multiple modules involved?
Andrew
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 09/16] net-next/yunsilicon: Init net device
2024-12-18 10:50 ` [PATCH v1 09/16] net-next/yunsilicon: Init net device Xin Tian
@ 2024-12-18 18:28 ` Andrew Lunn
2024-12-20 10:43 ` tianx
0 siblings, 1 reply; 33+ messages in thread
From: Andrew Lunn @ 2024-12-18 18:28 UTC (permalink / raw)
To: Xin Tian
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
> +static int xsc_attach_netdev(struct xsc_adapter *adapter)
> +{
> + int err = -1;
> +
> + err = xsc_eth_nic_enable(adapter);
> + if (err)
> + return err;
> +
> + xsc_core_info(adapter->xdev, "%s ok\n", __func__);
...
> +static int xsc_eth_attach(struct xsc_core_device *xdev, struct xsc_adapter *adapter)
> +{
> + int err = -1;
> +
> + if (netif_device_present(adapter->netdev))
> + return 0;
> +
> + err = xsc_attach_netdev(adapter);
> + if (err)
> + return err;
> +
> + xsc_core_info(adapter->xdev, "%s ok\n", __func__);
Don't spam the log like this. _dbg() or nothing.
> + err = xsc_eth_nic_init(adapter, rep_priv, num_chl, num_tc);
> + if (err) {
> + xsc_core_warn(xdev, "xsc_nic_init failed, num_ch=%d, num_tc=%d, err=%d\n",
> + num_chl, num_tc, err);
> + goto err_free_netdev;
> + }
> +
> + err = xsc_eth_attach(xdev, adapter);
> + if (err) {
> + xsc_core_warn(xdev, "xsc_eth_attach failed, err=%d\n", err);
> + goto err_cleanup_netdev;
> + }
> +
> err = register_netdev(netdev);
> if (err) {
> xsc_core_warn(xdev, "register_netdev failed, err=%d\n", err);
> - goto err_free_netdev;
> + goto err_detach;
> }
>
> xdev->netdev = (void *)netdev;
Before register_netdev() returns, the device is live and sending
packets, especially if you are using NFS root. What will happen if
xdev->netdev is NULL with those first few packets?
And why the void * cast?
> +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
> +/*
> + * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
> + * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
> + *
> + * This software is available to you under a choice of one of two
> + * licenses. You may choose to be licensed under the terms of the GNU
> + * General Public License (GPL) Version 2, available from the file
> + * COPYING in the main directory of this source tree, or the
> + * OpenIB.org BSD license below:
> + *
> + * Redistribution and use in source and binary forms, with or
> + * without modification, are permitted provided that the following
> + * conditions are met:
> + *
> + * - Redistributions of source code must retain the above
> + * copyright notice, this list of conditions and the following
> + * disclaimer.
> + *
> + * - Redistributions in binary form must reproduce the above
> + * copyright notice, this list of conditions and the following
> + * disclaimer in the documentation and/or other materials
> + * provided with the distribution.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> + * SOFTWARE.
> + */
The /* SPDX-License-Identifier: line replaces all such license
boilerplate. Please delete this.
Andrew
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 16/16] net-next/yunsilicon: Add change mtu
2024-12-18 10:50 ` [PATCH v1 16/16] net-next/yunsilicon: Add change mtu Xin Tian
@ 2024-12-18 18:31 ` Andrew Lunn
2024-12-20 7:07 ` tianx
0 siblings, 1 reply; 33+ messages in thread
From: Andrew Lunn @ 2024-12-18 18:31 UTC (permalink / raw)
To: Xin Tian
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
> +static int xsc_eth_change_mtu(struct net_device *netdev, int new_mtu)
> +{
> + struct xsc_adapter *adapter = netdev_priv(netdev);
> + int old_mtu = netdev->mtu;
> + int ret = 0;
> + int max_buf_len = 0;
> +
> + if (new_mtu > netdev->max_mtu || new_mtu < netdev->min_mtu) {
> + netdev_err(netdev, "%s: Bad MTU (%d), valid range is: [%d..%d]\n",
> + __func__, new_mtu, netdev->min_mtu, netdev->max_mtu);
> + return -EINVAL;
> + }
What checking does the core do for you, now that you have set max_mtu
and min_mtu?
Andrew
---
pw-bot: cr
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 13/16] net-next/yunsilicon: Add eth rx
2024-12-18 10:50 ` [PATCH v1 13/16] net-next/yunsilicon: Add eth rx Xin Tian
@ 2024-12-18 19:47 ` Andrew Lunn
2024-12-20 7:31 ` tianx
0 siblings, 1 reply; 33+ messages in thread
From: Andrew Lunn @ 2024-12-18 19:47 UTC (permalink / raw)
To: Xin Tian
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
> @@ -33,10 +33,572 @@
> * SOFTWARE.
> */
>
> +#include <linux/net_tstamp.h>
> +#include "xsc_eth.h"
> #include "xsc_eth_txrx.h"
> +#include "xsc_eth_common.h"
> +#include <linux/device.h>
> +#include "common/xsc_pp.h"
> +#include "xsc_pph.h"
> +
> +#define PAGE_REF_ELEV (U16_MAX)
> +/* Upper bound on number of packets that share a single page */
> +#define PAGE_REF_THRSD (PAGE_SIZE / 64)
> +
> +static inline void xsc_rq_notify_hw(struct xsc_rq *rq)
> +{
Please don't use inline functions in .c files. Let the compiler
decide.
Andrew
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
` (15 preceding siblings ...)
2024-12-18 10:50 ` [PATCH v1 16/16] net-next/yunsilicon: Add change mtu Xin Tian
@ 2024-12-19 0:35 ` Jakub Kicinski
2024-12-20 6:40 ` tianx
16 siblings, 1 reply; 33+ messages in thread
From: Jakub Kicinski @ 2024-12-19 0:35 UTC (permalink / raw)
To: Xin Tian
Cc: netdev, andrew+netdev, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
On Wed, 18 Dec 2024 18:51:01 +0800 Xin Tian wrote:
> 45 files changed, 12723 insertions(+)
This is a lot of code and above our preferred patch limit.
Please exclude the last 3 patches from v2. They don't seem
strictly necessary for the initial driver.
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver
2024-12-19 0:35 ` [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Jakub Kicinski
@ 2024-12-20 6:40 ` tianx
2024-12-20 9:21 ` Andrew Lunn
0 siblings, 1 reply; 33+ messages in thread
From: tianx @ 2024-12-20 6:40 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, andrew+netdev, pabeni, edumazet, davem, jeff.johnson,
przemyslaw.kitszel, weihg, wanry
Thank you for the feedback. I will remove the last two patches as
requested. However, I would like to keep the third-to-last
patch(ndo_get_stats64), as it ensures "|ifconfig|/|ip||a|" display
correct pkt statistics for our interface. I hope that’s acceptable.
On 2024/12/19 8:35, Jakub Kicinski wrote:
> On Wed, 18 Dec 2024 18:51:01 +0800 Xin Tian wrote:
>> 45 files changed, 12723 insertions(+)
> This is a lot of code and above our preferred patch limit.
> Please exclude the last 3 patches from v2. They don't seem
> strictly necessary for the initial driver.
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 16/16] net-next/yunsilicon: Add change mtu
2024-12-18 18:31 ` Andrew Lunn
@ 2024-12-20 7:07 ` tianx
0 siblings, 0 replies; 33+ messages in thread
From: tianx @ 2024-12-20 7:07 UTC (permalink / raw)
To: Andrew Lunn
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
You’re right, the check is redundant. However, as per Jakub’s suggestion
to reduce the patch count, this patch will be excluded in the next
version. I'll address it locally and submit in a future update.
Thank you for pointing that out!
On 2024/12/19 2:31, Andrew Lunn wrote:
>> +static int xsc_eth_change_mtu(struct net_device *netdev, int new_mtu)
>> +{
>> + struct xsc_adapter *adapter = netdev_priv(netdev);
>> + int old_mtu = netdev->mtu;
>> + int ret = 0;
>> + int max_buf_len = 0;
>> +
>> + if (new_mtu > netdev->max_mtu || new_mtu < netdev->min_mtu) {
>> + netdev_err(netdev, "%s: Bad MTU (%d), valid range is: [%d..%d]\n",
>> + __func__, new_mtu, netdev->min_mtu, netdev->max_mtu);
>> + return -EINVAL;
>> + }
> What checking does the core do for you, now that you have set max_mtu
> and min_mtu?
>
>
> Andrew
>
> ---
> pw-bot: cr
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 13/16] net-next/yunsilicon: Add eth rx
2024-12-18 19:47 ` Andrew Lunn
@ 2024-12-20 7:31 ` tianx
0 siblings, 0 replies; 33+ messages in thread
From: tianx @ 2024-12-20 7:31 UTC (permalink / raw)
To: Andrew Lunn
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
Thank you for your suggestion. I will address this issue in the next
version and avoid using inline functions in the .c files.
On 2024/12/19 3:47, Andrew Lunn wrote:
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
>> @@ -33,10 +33,572 @@
>> * SOFTWARE.
>> */
>>
>> +#include <linux/net_tstamp.h>
>> +#include "xsc_eth.h"
>> #include "xsc_eth_txrx.h"
>> +#include "xsc_eth_common.h"
>> +#include <linux/device.h>
>> +#include "common/xsc_pp.h"
>> +#include "xsc_pph.h"
>> +
>> +#define PAGE_REF_ELEV (U16_MAX)
>> +/* Upper bound on number of packets that share a single page */
>> +#define PAGE_REF_THRSD (PAGE_SIZE / 64)
>> +
>> +static inline void xsc_rq_notify_hw(struct xsc_rq *rq)
>> +{
> Please don't use inline functions in .c files. Let the compiler
> decide.
>
> Andrew
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver
2024-12-20 6:40 ` tianx
@ 2024-12-20 9:21 ` Andrew Lunn
0 siblings, 0 replies; 33+ messages in thread
From: Andrew Lunn @ 2024-12-20 9:21 UTC (permalink / raw)
To: tianx
Cc: Jakub Kicinski, netdev, andrew+netdev, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
On Fri, Dec 20, 2024 at 02:40:33PM +0800, tianx wrote:
> Thank you for the feedback. I will remove the last two patches as
> requested. However, I would like to keep the third-to-last
> patch(ndo_get_stats64), as it ensures "|ifconfig|/|ip||a|" display
> correct pkt statistics for our interface. I hope that’s acceptable.
Please don't top post.
Andrew
>
> On 2024/12/19 8:35, Jakub Kicinski wrote:
> > On Wed, 18 Dec 2024 18:51:01 +0800 Xin Tian wrote:
> >> 45 files changed, 12723 insertions(+)
> > This is a lot of code and above our preferred patch limit.
> > Please exclude the last 3 patches from v2. They don't seem
> > strictly necessary for the initial driver.
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 09/16] net-next/yunsilicon: Init net device
2024-12-18 18:28 ` Andrew Lunn
@ 2024-12-20 10:43 ` tianx
0 siblings, 0 replies; 33+ messages in thread
From: tianx @ 2024-12-20 10:43 UTC (permalink / raw)
To: Andrew Lunn
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
On 2024/12/19 2:28, Andrew Lunn wrote:
>> +static int xsc_attach_netdev(struct xsc_adapter *adapter)
>> +{
>> + int err = -1;
>> +
>> + err = xsc_eth_nic_enable(adapter);
>> + if (err)
>> + return err;
>> +
>> + xsc_core_info(adapter->xdev, "%s ok\n", __func__);
> ...
>
>> +static int xsc_eth_attach(struct xsc_core_device *xdev, struct xsc_adapter *adapter)
>> +{
>> + int err = -1;
>> +
>> + if (netif_device_present(adapter->netdev))
>> + return 0;
>> +
>> + err = xsc_attach_netdev(adapter);
>> + if (err)
>> + return err;
>> +
>> + xsc_core_info(adapter->xdev, "%s ok\n", __func__);
> Don't spam the log like this. _dbg() or nothing.
OK, will fix in v2.
>> + err = xsc_eth_nic_init(adapter, rep_priv, num_chl, num_tc);
>> + if (err) {
>> + xsc_core_warn(xdev, "xsc_nic_init failed, num_ch=%d, num_tc=%d, err=%d\n",
>> + num_chl, num_tc, err);
>> + goto err_free_netdev;
>> + }
>> +
>> + err = xsc_eth_attach(xdev, adapter);
>> + if (err) {
>> + xsc_core_warn(xdev, "xsc_eth_attach failed, err=%d\n", err);
>> + goto err_cleanup_netdev;
>> + }
>> +
>> err = register_netdev(netdev);
>> if (err) {
>> xsc_core_warn(xdev, "register_netdev failed, err=%d\n", err);
>> - goto err_free_netdev;
>> + goto err_detach;
>> }
>>
>> xdev->netdev = (void *)netdev;
> Before register_netdev() returns, the device is live and sending
> packets, especially if you are using NFS root. What will happen if
> xdev->netdev is NULL with those first few packets?
It's OK because xdev->netdev is never used before register_netdev() returns.
>
> And why the void * cast?
unnecessary cast, will fix that
>> +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
>> +/*
>> + * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> + * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
>> + *
>> + * This software is available to you under a choice of one of two
>> + * licenses. You may choose to be licensed under the terms of the GNU
>> + * General Public License (GPL) Version 2, available from the file
>> + * COPYING in the main directory of this source tree, or the
>> + * OpenIB.org BSD license below:
>> + *
>> + * Redistribution and use in source and binary forms, with or
>> + * without modification, are permitted provided that the following
>> + * conditions are met:
>> + *
>> + * - Redistributions of source code must retain the above
>> + * copyright notice, this list of conditions and the following
>> + * disclaimer.
>> + *
>> + * - Redistributions in binary form must reproduce the above
>> + * copyright notice, this list of conditions and the following
>> + * disclaimer in the documentation and/or other materials
>> + * provided with the distribution.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
>> + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
>> + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
>> + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
>> + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
>> + * SOFTWARE.
>> + */
> The /* SPDX-License-Identifier: line replaces all such license
> boilerplate. Please delete this.
>
> Andrew
will delete that in v2
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework
2024-12-18 18:20 ` Andrew Lunn
@ 2024-12-23 3:20 ` tianx
2024-12-24 2:27 ` Andrew Lunn
0 siblings, 1 reply; 33+ messages in thread
From: tianx @ 2024-12-23 3:20 UTC (permalink / raw)
To: Andrew Lunn
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
On 2024/12/19 2:20, Andrew Lunn wrote:
>> +enum {
>> + XSC_LOG_LEVEL_DBG = 0,
>> + XSC_LOG_LEVEL_INFO = 1,
>> + XSC_LOG_LEVEL_WARN = 2,
>> + XSC_LOG_LEVEL_ERR = 3,
>> +};
>> +
>> +#define xsc_dev_log(condition, level, dev, fmt, ...) \
>> +do { \
>> + if (condition) \
>> + dev_printk(level, dev, dev_fmt(fmt), ##__VA_ARGS__); \
>> +} while (0)
>> +
>> +#define xsc_core_dbg(__dev, format, ...) \
>> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_DBG, KERN_DEBUG, \
>> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
>> +
>> +#define xsc_core_dbg_once(__dev, format, ...) \
>> + dev_dbg_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, \
>> + ##__VA_ARGS__)
>> +
>> +#define xsc_core_dbg_mask(__dev, mask, format, ...) \
>> +do { \
>> + if ((mask) & xsc_debug_mask) \
>> + xsc_core_dbg(__dev, format, ##__VA_ARGS__); \
>> +} while (0)
> You where asked to throw all these away and just use the existing
> methods.
>
> If you disagree with a comment, please reply and ask for more details,
> understand the reason behind the comment, or maybe try to justify your
> solution over what already exists.
>
> Maybe look at the ethtool .get_msglevel & .set_msglevel if you are not
> already using them.
Apologies for the delayed reply. Thank you for the feedback.
Our driver suite consists of three modules: xsc_pci (which manages
hardware resources and provides common services for the other two
modules), xsc_eth (providing Ethernet functionality), and xsc_ib
(offering RDMA functionality). The patch set we are submitting currently
includes xsc_pci and xsc_eth.
To ensure consistent and fine-grained log control for all modules, we
have wrapped these logging functions for ease of use. The use of these
interfaces is strictly limited to our driver and does not impact other
parts of the kernel. I believe this can be considered a small feature
within our code. I’ve also observed similar implementations in other
drivers, such as in |drivers/net/ethernet/chelsio/common.h| and
|drivers/net/ethernet/adaptec/starfire.c|.
The |get_msglevel| and |set_msglevel| can only be used in the Ethernet
driver, whereas we need to define a shared |log_level| in the PCI driver.
Please let me know the main concerns of the community, and we will be
happy to make any necessary adjustments.
>> +unsigned int xsc_log_level = XSC_LOG_LEVEL_WARN;
>> +module_param_named(log_level, xsc_log_level, uint, 0644);
>> +MODULE_PARM_DESC(log_level,
>> + "lowest log level to print: 0=debug, 1=info, 2=warning, 3=error. Default=1");
> Module parameters are not liked. You will however find quite a few
> drivers with something like:
>
> MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
>
> which is used to set the initial msglevel. That will probably be
> accepted.
got it.
>> +EXPORT_SYMBOL(xsc_log_level);
> I've not looked at your overall structure yet, but why export this?
> Are there multiple modules involved?
The two modules we are currently submitting (xsc_pci and xsc_eth) both
need access to |xsc_log_level|, so it is being exported.
>
> Andrew
Thank you, Andrew. Looking forward to your reply.
Best regards,
Xin Tian
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework
2024-12-18 13:58 ` Przemek Kitszel
@ 2024-12-23 3:43 ` tianx
0 siblings, 0 replies; 33+ messages in thread
From: tianx @ 2024-12-23 3:43 UTC (permalink / raw)
To: Przemek Kitszel
Cc: andrew+netdev, kuba, netdev, pabeni, edumazet, davem,
jeff.johnson, weihg, wanry
On 2024/12/18 21:58, Przemek Kitszel wrote:
> On 12/18/24 11:50, Xin Tian wrote:
>> Add yunsilicon xsc driver basic framework, including xsc_pci driver
>> and xsc_eth driver
>>
>> Co-developed-by: Honggang Wei <weihg@yunsilicon.com>
>> Co-developed-by: Lei Yan <Jacky@yunsilicon.com>
>
> Co-devs need to sign-off too, scripts/checkpatch.pl would catch that
> (and more)
>
got it
>> Signed-off-by: Xin Tian <tianx@yunsilicon.com>
>> ---
>> drivers/net/ethernet/Kconfig | 1 +
>> drivers/net/ethernet/Makefile | 1 +
>> drivers/net/ethernet/yunsilicon/Kconfig | 26 ++
>> drivers/net/ethernet/yunsilicon/Makefile | 8 +
>> .../ethernet/yunsilicon/xsc/common/xsc_core.h | 132 +++++++++
>> .../net/ethernet/yunsilicon/xsc/net/Kconfig | 16 ++
>> .../net/ethernet/yunsilicon/xsc/net/Makefile | 9 +
>> .../net/ethernet/yunsilicon/xsc/pci/Kconfig | 16 ++
>> .../net/ethernet/yunsilicon/xsc/pci/Makefile | 9 +
>> .../net/ethernet/yunsilicon/xsc/pci/main.c | 272 ++++++++++++++++++
>> 10 files changed, 490 insertions(+)
>> create mode 100644 drivers/net/ethernet/yunsilicon/Kconfig
>> create mode 100644 drivers/net/ethernet/yunsilicon/Makefile
>> create mode 100644
>> drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
>> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
>> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Makefile
>> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
>> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
>> create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/main.c
>>
>> diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
>> index 0baac25db..aa6016597 100644
>> --- a/drivers/net/ethernet/Kconfig
>> +++ b/drivers/net/ethernet/Kconfig
>> @@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig"
>> source "drivers/net/ethernet/ibm/Kconfig"
>> source "drivers/net/ethernet/intel/Kconfig"
>> source "drivers/net/ethernet/xscale/Kconfig"
>> +source "drivers/net/ethernet/yunsilicon/Kconfig"
>> config JME
>> tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
>> diff --git a/drivers/net/ethernet/Makefile
>> b/drivers/net/ethernet/Makefile
>> index c03203439..c16c34d4b 100644
>> --- a/drivers/net/ethernet/Makefile
>> +++ b/drivers/net/ethernet/Makefile
>> @@ -51,6 +51,7 @@ obj-$(CONFIG_NET_VENDOR_INTEL) += intel/
>> obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/
>> obj-$(CONFIG_NET_VENDOR_MICROSOFT) += microsoft/
>> obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/
>> +obj-$(CONFIG_NET_VENDOR_YUNSILICON) += yunsilicon/
>> obj-$(CONFIG_JME) += jme.o
>> obj-$(CONFIG_KORINA) += korina.o
>> obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o
>> diff --git a/drivers/net/ethernet/yunsilicon/Kconfig
>> b/drivers/net/ethernet/yunsilicon/Kconfig
>> new file mode 100644
>> index 000000000..c766390b4
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/Kconfig
>> @@ -0,0 +1,26 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> +# All rights reserved.
>> +# Yunsilicon driver configuration
>> +#
>> +
>> +config NET_VENDOR_YUNSILICON
>> + bool "Yunsilicon devices"
>> + default y
>> + depends on PCI || NET
>
> Would it work for you to have only one of the above enabled?
>
> I didn't noticed your response to the same question on your v0
> (BTW, versioning starts at v0, remember to add also links to previous
> versions (not needed for your v0, to don't bother you with 16 URLs :))
>
Will modify to PCI only, and move NET to xsc_eth Kconfig
>> + depends on ARM64 || X86_64
>> + help
>> + If you have a network (Ethernet) device belonging to this class,
>> + say Y.
>> +
>> + Note that the answer to this question doesn't directly affect the
>> + kernel: saying N will just cause the configurator to skip all
>> + the questions about Yunsilicon cards. If you say Y, you will
>> be asked
>> + for your specific card in the following questions.
>> +
>> +if NET_VENDOR_YUNSILICON
>> +
>> +source "drivers/net/ethernet/yunsilicon/xsc/net/Kconfig"
>> +source "drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig"
>> +
>> +endif # NET_VENDOR_YUNSILICON
>> diff --git a/drivers/net/ethernet/yunsilicon/Makefile
>> b/drivers/net/ethernet/yunsilicon/Makefile
>> new file mode 100644
>> index 000000000..6fc8259a7
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/Makefile
>> @@ -0,0 +1,8 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> +# All rights reserved.
>> +# Makefile for the Yunsilicon device drivers.
>> +#
>> +
>> +# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
>> +obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
>> \ No newline at end of file
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
>> b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
>> new file mode 100644
>> index 000000000..5ed12760e
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
>> @@ -0,0 +1,132 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> + * All rights reserved.
>> + */
>> +
>> +#ifndef XSC_CORE_H
>> +#define XSC_CORE_H
>
> typically there are two underscores in the header names
>
Got it.
>> +
>> +#include <linux/kernel.h>
>> +#include <linux/pci.h>
>> +
>> +extern unsigned int xsc_log_level;
>> +
>> +#define XSC_PCI_VENDOR_ID 0x1f67
>> +
>> +#define XSC_MC_PF_DEV_ID 0x1011
>> +#define XSC_MC_VF_DEV_ID 0x1012
>> +#define XSC_MC_PF_DEV_ID_DIAMOND 0x1021
>> +
>> +#define XSC_MF_HOST_PF_DEV_ID 0x1051
>> +#define XSC_MF_HOST_VF_DEV_ID 0x1052
>> +#define XSC_MF_SOC_PF_DEV_ID 0x1053
>> +
>> +#define XSC_MS_PF_DEV_ID 0x1111
>> +#define XSC_MS_VF_DEV_ID 0x1112
>> +
>> +#define XSC_MV_HOST_PF_DEV_ID 0x1151
>> +#define XSC_MV_HOST_VF_DEV_ID 0x1152
>> +#define XSC_MV_SOC_PF_DEV_ID 0x1153
>> +
>> +enum {
>> + XSC_LOG_LEVEL_DBG = 0,
>> + XSC_LOG_LEVEL_INFO = 1,
>> + XSC_LOG_LEVEL_WARN = 2,
>> + XSC_LOG_LEVEL_ERR = 3,
>> +};
>> +
>> +#define xsc_dev_log(condition, level, dev, fmt, ...) \
>> +do { \
>> + if (condition) \
>> + dev_printk(level, dev, dev_fmt(fmt), ##__VA_ARGS__); \
>> +} while (0)
>> +
>> +#define xsc_core_dbg(__dev, format, ...) \
>> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_DBG, KERN_DEBUG, \
>> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
>> +
>> +#define xsc_core_dbg_once(__dev, format, ...) \
>> + dev_dbg_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, \
>> + ##__VA_ARGS__)
>> +
>> +#define xsc_core_dbg_mask(__dev, mask, format, ...) \
>> +do { \
>> + if ((mask) & xsc_debug_mask) \
>> + xsc_core_dbg(__dev, format, ##__VA_ARGS__); \
>> +} while (0)
>> +
>> +#define xsc_core_err(__dev, format, ...) \
>> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_ERR, KERN_ERR, \
>> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
>> +
>> +#define xsc_core_err_rl(__dev, format, ...) \
>> + dev_err_ratelimited(&(__dev)->pdev->dev, \
>> + "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, \
>> + ##__VA_ARGS__)
>> +
>> +#define xsc_core_warn(__dev, format, ...) \
>> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_WARN, KERN_WARNING, \
>> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
>> +
>> +#define xsc_core_info(__dev, format, ...) \
>> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_INFO, KERN_INFO, \
>> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
>> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
>> +
>> +#define xsc_pr_debug(format, ...) \
>> +do { \
>> + if (xsc_log_level <= XSC_LOG_LEVEL_DBG) \
>> + pr_debug(format, ##__VA_ARGS__); \
>> +} while (0)
>> +
>> +#define assert(__dev, expr) \
>> +do { \
>> + if (!(expr)) { \
>> + dev_err(&(__dev)->pdev->dev, \
>> + "Assertion failed! %s, %s, %s, line %d\n", \
>> + #expr, __FILE__, __func__, __LINE__); \
>> + } \
>> +} while (0)
>
> as a rule of thumb, don't add functions/macros that you don't use in
> given patch
>
> have you seen WARN_ON() family?
>
Thank you, will use WARN_ON instead of assert.
>> +
>> +enum {
>> + XSC_MAX_NAME_LEN = 32,
>> +};
>> +
>> +struct xsc_dev_resource {
>> + struct mutex alloc_mutex; /* protect buffer alocation
>> according to numa node */
>> +};
>> +
>> +enum xsc_pci_state {
>> + XSC_PCI_STATE_DISABLED,
>> + XSC_PCI_STATE_ENABLED,
>> +};
>> +
>> +struct xsc_priv {
>> + char name[XSC_MAX_NAME_LEN];
>> + struct list_head dev_list;
>> + struct list_head ctx_list;
>> + spinlock_t ctx_lock; /* protect ctx_list */
>> + int numa_node;
>> +};
>> +
>> +struct xsc_core_device {
>> + struct pci_dev *pdev;
>> + struct device *device;
>> + struct xsc_priv priv;
>> + struct xsc_dev_resource *dev_res;
>> +
>> + void __iomem *bar;
>> + int bar_num;
>> +
>> + struct mutex pci_state_mutex; /* protect pci_state */
>> + enum xsc_pci_state pci_state;
>> + struct mutex intf_state_mutex; /* protect intf_state */
>> + unsigned long intf_state;
>> +};
>> +
>> +#endif
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
>> b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
>> new file mode 100644
>> index 000000000..0d9a4ff8a
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
>> @@ -0,0 +1,16 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> +# All rights reserved.
>> +# Yunsilicon driver configuration
>> +#
>> +
>> +config YUNSILICON_XSC_ETH
>> + tristate "Yunsilicon XSC ethernet driver"
>> + default n
>> + depends on YUNSILICON_XSC_PCI
>> + help
>> + This driver provides ethernet support for
>> + Yunsilicon XSC devices.
>> +
>> + To compile this driver as a module, choose M here. The module
>> + will be called xsc_eth.
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
>> b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
>> new file mode 100644
>> index 000000000..2811433af
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
>> @@ -0,0 +1,9 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> +# All rights reserved.
>> +
>> +ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
>> +
>> +obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
>> +
>> +xsc_eth-y := main.o
>> \ No newline at end of file
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
>> b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
>> new file mode 100644
>> index 000000000..2b6d79905
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
>> @@ -0,0 +1,16 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> +# All rights reserved.
>> +# Yunsilicon PCI configuration
>> +#
>> +
>> +config YUNSILICON_XSC_PCI
>> + tristate "Yunsilicon XSC PCI driver"
>> + default n
>> + select PAGE_POOL
>> + help
>> + This driver is common for Yunsilicon XSC
>> + ethernet and RDMA drivers.
>> +
>> + To compile this driver as a module, choose M here. The module
>> + will be called xsc_pci.
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
>> b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
>> new file mode 100644
>> index 000000000..709270df8
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
>> @@ -0,0 +1,9 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> +# All rights reserved.
>> +
>> +ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
>> +
>> +obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
>> +
>> +xsc_pci-y := main.o
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
>> b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
>> new file mode 100644
>> index 000000000..cbe0bfbd1
>> --- /dev/null
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
>> @@ -0,0 +1,272 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
>> + * All rights reserved.
>> + */
>> +
>> +#include "common/xsc_core.h"
>> +
>> +unsigned int xsc_log_level = XSC_LOG_LEVEL_WARN;
>> +module_param_named(log_level, xsc_log_level, uint, 0644);
>> +MODULE_PARM_DESC(log_level,
>> + "lowest log level to print: 0=debug, 1=info, 2=warning,
>> 3=error. Default=1");
>> +EXPORT_SYMBOL(xsc_log_level);
>> +
>> +#define XSC_PCI_DRV_DESC "Yunsilicon Xsc PCI driver"
>
> remove the define and just use the string inplace as desription
OK, will modify.
>
>> +
>> +static const struct pci_device_id xsc_pci_id_table[] = {
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID_DIAMOND) },
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_HOST_PF_DEV_ID) },
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_SOC_PF_DEV_ID) },
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MS_PF_DEV_ID) },
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_HOST_PF_DEV_ID) },
>> + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_SOC_PF_DEV_ID) },
>> + { 0 }
>> +};
>> +
>> +static int set_dma_caps(struct pci_dev *pdev)
>> +{
>> + int err;
>> +
>> + err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
>> + if (err)
>> + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
>> + else
>> + err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
>> +
>> + if (!err)
>> + dma_set_max_seg_size(&pdev->dev, SZ_2G);
>> +
>> + return err;
>> +}
>> +
>> +static int xsc_pci_enable_device(struct xsc_core_device *xdev)
>> +{
>> + struct pci_dev *pdev = xdev->pdev;
>> + int err = 0;
>> +
>> + mutex_lock(&xdev->pci_state_mutex);
>> + if (xdev->pci_state == XSC_PCI_STATE_DISABLED) {
>> + err = pci_enable_device(pdev);
>> + if (!err)
>> + xdev->pci_state = XSC_PCI_STATE_ENABLED;
>> + }
>> + mutex_unlock(&xdev->pci_state_mutex);
>> +
>> + return err;
>> +}
>> +
>> +static void xsc_pci_disable_device(struct xsc_core_device *xdev)
>> +{
>> + struct pci_dev *pdev = xdev->pdev;
>> +
>> + mutex_lock(&xdev->pci_state_mutex);
>> + if (xdev->pci_state == XSC_PCI_STATE_ENABLED) {
>> + pci_disable_device(pdev);
>> + xdev->pci_state = XSC_PCI_STATE_DISABLED;
>> + }
>> + mutex_unlock(&xdev->pci_state_mutex);
>> +}
>> +
>> +static int xsc_pci_init(struct xsc_core_device *xdev, const struct
>> pci_device_id *id)
>> +{
>> + struct pci_dev *pdev = xdev->pdev;
>> + void __iomem *bar_base;
>> + int bar_num = 0;
>> + int err;
>> +
>> + mutex_init(&xdev->pci_state_mutex);
>> + xdev->priv.numa_node = dev_to_node(&pdev->dev);
>> +
>> + err = xsc_pci_enable_device(xdev);
>> + if (err) {
>> + xsc_core_err(xdev, "failed to enable PCI device: err=%d\n",
>> err);
>> + goto err_ret;
>> + }
>> +
>> + err = pci_request_region(pdev, bar_num, KBUILD_MODNAME);
>> + if (err) {
>> + xsc_core_err(xdev, "failed to request %s pci_region=%d:
>> err=%d\n",
>> + KBUILD_MODNAME, bar_num, err);
>> + goto err_disable;
>> + }
>> +
>> + pci_set_master(pdev);
>> +
>> + err = set_dma_caps(pdev);
>> + if (err) {
>> + xsc_core_err(xdev, "failed to set DMA capabilities mask:
>> err=%d\n", err);
>> + goto err_clr_master;
>> + }
>> +
>> + bar_base = pci_ioremap_bar(pdev, bar_num);
>> + if (!bar_base) {
>> + xsc_core_err(xdev, "failed to ioremap %s bar%d\n",
>> KBUILD_MODNAME, bar_num);
>> + goto err_clr_master;
>> + }
>> +
>> + err = pci_save_state(pdev);
>> + if (err) {
>> + xsc_core_err(xdev, "pci_save_state failed: err=%d\n", err);
>> + goto err_io_unmap;
>> + }
>> +
>> + xdev->bar_num = bar_num;
>> + xdev->bar = bar_base;
>> +
>> + return 0;
>> +
>> +err_io_unmap:
>> + pci_iounmap(pdev, bar_base);
>> +err_clr_master:
>> + pci_clear_master(pdev);
>> + pci_release_region(pdev, bar_num);
>> +err_disable:
>> + xsc_pci_disable_device(xdev);
>> +err_ret:
>> + return err;
>> +}
>> +
>> +static void xsc_pci_fini(struct xsc_core_device *xdev)
>> +{
>> + struct pci_dev *pdev = xdev->pdev;
>> +
>> + if (xdev->bar)
>> + pci_iounmap(pdev, xdev->bar);
>> + pci_clear_master(pdev);
>> + pci_release_region(pdev, xdev->bar_num);
>> + xsc_pci_disable_device(xdev);
>> +}
>> +
>> +static int xsc_priv_init(struct xsc_core_device *xdev)
>> +{
>> + struct xsc_priv *priv = &xdev->priv;
>> +
>> + strscpy(priv->name, dev_name(&xdev->pdev->dev), XSC_MAX_NAME_LEN);
>> +
>> + INIT_LIST_HEAD(&priv->ctx_list);
>> + spin_lock_init(&priv->ctx_lock);
>> + mutex_init(&xdev->intf_state_mutex);
>> +
>> + return 0;
>> +}
>> +
>> +static int xsc_dev_res_init(struct xsc_core_device *xdev)
>> +{
>> + struct xsc_dev_resource *dev_res;
>> +
>> + dev_res = kvzalloc(sizeof(*dev_res), GFP_KERNEL);
>> + if (!dev_res)
>> + return -ENOMEM;
>> +
>> + xdev->dev_res = dev_res;
>> + mutex_init(&dev_res->alloc_mutex);
>> +
>> + return 0;
>> +}
>> +
>> +static void xsc_dev_res_cleanup(struct xsc_core_device *xdev)
>> +{
>> + kfree(xdev->dev_res);
>> +}
>> +
>> +static int xsc_core_dev_init(struct xsc_core_device *xdev)
>> +{
>> + int err;
>> +
>> + xsc_priv_init(xdev);
>> +
>> + err = xsc_dev_res_init(xdev);
>> + if (err) {
>> + xsc_core_err(xdev, "xsc dev res init failed %d\n", err);
>> + goto out;
>
> return err...
Thanks for the feedback. The |return err;| is retained to accommodate
additional error handling logic in subsequent patches. After those
patches are added, this structure will look OK.
>
>> + }
>> +
>> + return 0;
>> +out:
>> + return err;
>
> ...so you could remove last two lines
>
>> +}
>> +
>> +static void xsc_core_dev_cleanup(struct xsc_core_device *xdev)
>> +{
>> + xsc_dev_res_cleanup(xdev);
>> +}
>> +
>> +static int xsc_pci_probe(struct pci_dev *pci_dev,
>> + const struct pci_device_id *id)
>> +{
>> + struct xsc_core_device *xdev;
>> + int err;
>> +
>> + xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
>> + if (!xdev)
>> + return -ENOMEM;
>> +
>> + xdev->pdev = pci_dev;
>> + xdev->device = &pci_dev->dev;
>> +
>> + pci_set_drvdata(pci_dev, xdev);
>> + err = xsc_pci_init(xdev, id);
>> + if (err) {
>> + xsc_core_err(xdev, "xsc_pci_init failed %d\n", err);
>> + goto err_unset_pci_drvdata;
>> + }
>> +
>> + err = xsc_core_dev_init(xdev);
>> + if (err) {
>> + xsc_core_err(xdev, "xsc_core_dev_init failed %d\n", err);
>> + goto err_pci_fini;
>> + }
>> +
>> + return 0;
>> +err_pci_fini:
>> + xsc_pci_fini(xdev);
>> +err_unset_pci_drvdata:
>> + pci_set_drvdata(pci_dev, NULL);
>> + kfree(xdev);
>> +
>> + return err;
>> +}
>> +
>> +static void xsc_pci_remove(struct pci_dev *pci_dev)
>> +{
>> + struct xsc_core_device *xdev = pci_get_drvdata(pci_dev);
>> +
>> + xsc_core_dev_cleanup(xdev);
>> + xsc_pci_fini(xdev);
>> + pci_set_drvdata(pci_dev, NULL);
>> + kfree(xdev);
>> +}
>> +
>> +static struct pci_driver xsc_pci_driver = {
>> + .name = "xsc-pci",
>> + .id_table = xsc_pci_id_table,
>> + .probe = xsc_pci_probe,
>> + .remove = xsc_pci_remove,
>> +};
>> +
>> +static int __init xsc_init(void)
>> +{
>> + int err;
>> +
>> + err = pci_register_driver(&xsc_pci_driver);
>> + if (err) {
>> + pr_err("failed to register pci driver\n");
>> + goto out;
>
> ditto plain return
>
>> + }
>> + return 0;
>> +
>> +out:
>> + return err;
>> +}
>> +
>> +static void __exit xsc_fini(void)
>> +{
>> + pci_unregister_driver(&xsc_pci_driver);
>> +}
>> +
>> +module_init(xsc_init);
>> +module_exit(xsc_fini);
>> +
>> +MODULE_LICENSE("GPL");
>> +MODULE_DESCRIPTION(XSC_PCI_DRV_DESC);
>
Thank you,Przemek,for your thoughtful review and patient explanations.
Your clarification of some important rules is really valuable for
someone like me who is new to this. I will make sure to follow these
guidelines in my future changes. I also appreciate all the feedback
you’ve provided earlier.
Best regards,
Xin Tian
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework
2024-12-23 3:20 ` tianx
@ 2024-12-24 2:27 ` Andrew Lunn
0 siblings, 0 replies; 33+ messages in thread
From: Andrew Lunn @ 2024-12-24 2:27 UTC (permalink / raw)
To: tianx
Cc: netdev, andrew+netdev, kuba, pabeni, edumazet, davem,
jeff.johnson, przemyslaw.kitszel, weihg, wanry
On Mon, Dec 23, 2024 at 11:20:20AM +0800, tianx wrote:
> On 2024/12/19 2:20, Andrew Lunn wrote:
> >> +enum {
> >> + XSC_LOG_LEVEL_DBG = 0,
> >> + XSC_LOG_LEVEL_INFO = 1,
> >> + XSC_LOG_LEVEL_WARN = 2,
> >> + XSC_LOG_LEVEL_ERR = 3,
> >> +};
> >> +
> >> +#define xsc_dev_log(condition, level, dev, fmt, ...) \
> >> +do { \
> >> + if (condition) \
> >> + dev_printk(level, dev, dev_fmt(fmt), ##__VA_ARGS__); \
> >> +} while (0)
> >> +
> >> +#define xsc_core_dbg(__dev, format, ...) \
> >> + xsc_dev_log(xsc_log_level <= XSC_LOG_LEVEL_DBG, KERN_DEBUG, \
> >> + &(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> >> + __func__, __LINE__, current->pid, ##__VA_ARGS__)
> >> +
> >> +#define xsc_core_dbg_once(__dev, format, ...) \
> >> + dev_dbg_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \
> >> + __func__, __LINE__, current->pid, \
> >> + ##__VA_ARGS__)
> >> +
> >> +#define xsc_core_dbg_mask(__dev, mask, format, ...) \
> >> +do { \
> >> + if ((mask) & xsc_debug_mask) \
> >> + xsc_core_dbg(__dev, format, ##__VA_ARGS__); \
> >> +} while (0)
> > You where asked to throw all these away and just use the existing
> > methods.
> >
> > If you disagree with a comment, please reply and ask for more details,
> > understand the reason behind the comment, or maybe try to justify your
> > solution over what already exists.
> >
> > Maybe look at the ethtool .get_msglevel & .set_msglevel if you are not
> > already using them.
>
> Apologies for the delayed reply. Thank you for the feedback.
>
> Our driver suite consists of three modules: xsc_pci (which manages
> hardware resources and provides common services for the other two
> modules), xsc_eth (providing Ethernet functionality), and xsc_ib
> (offering RDMA functionality). The patch set we are submitting currently
> includes xsc_pci and xsc_eth.
>
> To ensure consistent and fine-grained log control for all modules, we
> have wrapped these logging functions for ease of use. The use of these
> interfaces is strictly limited to our driver and does not impact other
> parts of the kernel. I believe this can be considered a small feature
> within our code. I’ve also observed similar implementations in other
> drivers, such as in |drivers/net/ethernet/chelsio/common.h| and
> |drivers/net/ethernet/adaptec/starfire.c|.
Did you look at the age of these drivers? starfire has been around
from before git was adopted, 2005. Chelsio is of a similar age. You
cannot expect anything so old to be a good reference of today's best
practices.
Please just use netdev_dbg(), netdev_info(), netdev_warn(),
netdev_err() etc for the ethernet driver, combined with msglevel. It
is an ethernet driver, so the usual ethernet driver APIs should be
used.
For the PCI driver, pci_dbg(), pci_info(), pci_notices() etc.
I don't know rdma, is there rdma_dbg(), rdma_info() etc.
Andrew
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ
2024-12-18 14:46 ` Przemek Kitszel
@ 2024-12-27 3:52 ` tianx
0 siblings, 0 replies; 33+ messages in thread
From: tianx @ 2024-12-27 3:52 UTC (permalink / raw)
To: Przemek Kitszel
Cc: andrew+netdev, kuba, netdev, pabeni, edumazet, davem,
jeff.johnson, weihg, wanry
>> + * complete and we may need to increase this timeout value
>> + */
>> + XSC_CMD_TIMEOUT_MSEC = 10 * 1000,
>> + XSC_CMD_WQ_MAX_NAME = 32,
>
> take a look at the abundant kernel provided work queues, not need to
> spawn your own most of the time
>
>
Hi, Przemek
Thank you for the detailed review. The previous issues will be addressed
in v2, but this one needs clarification.
We need a workqueue that executes tasks sequentially, but the workqueues
provided by the kernel do not meet this requirement, so we used
|create_singlethread_workqueue| to create one.
Best regards,
Xin
^ permalink raw reply [flat|nested] 33+ messages in thread
end of thread, other threads:[~2024-12-27 3:52 UTC | newest]
Thread overview: 33+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-12-18 10:51 [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Xin Tian
2024-12-18 10:50 ` [PATCH v1 01/16] net-next/yunsilicon: Add xsc driver basic framework Xin Tian
2024-12-18 13:58 ` Przemek Kitszel
2024-12-23 3:43 ` tianx
2024-12-18 18:20 ` Andrew Lunn
2024-12-23 3:20 ` tianx
2024-12-24 2:27 ` Andrew Lunn
2024-12-18 10:50 ` [PATCH v1 02/16] net-next/yunsilicon: Enable CMDQ Xin Tian
2024-12-18 14:46 ` Przemek Kitszel
2024-12-27 3:52 ` tianx
2024-12-18 10:50 ` [PATCH v1 03/16] net-next/yunsilicon: Add hardware setup APIs Xin Tian
2024-12-18 10:50 ` [PATCH v1 04/16] net-next/yunsilicon: Add qp and cq management Xin Tian
2024-12-18 10:50 ` [PATCH v1 05/16] net-next/yunsilicon: Add eq and alloc Xin Tian
2024-12-18 10:50 ` [PATCH v1 06/16] net-next/yunsilicon: Add pci irq Xin Tian
2024-12-18 10:50 ` [PATCH v1 07/16] net-next/yunsilicon: Device and interface management Xin Tian
2024-12-18 10:50 ` [PATCH v1 08/16] net-next/yunsilicon: Add ethernet interface Xin Tian
2024-12-18 10:50 ` [PATCH v1 09/16] net-next/yunsilicon: Init net device Xin Tian
2024-12-18 18:28 ` Andrew Lunn
2024-12-20 10:43 ` tianx
2024-12-18 10:50 ` [PATCH v1 10/16] net-next/yunsilicon: Add eth needed qp and cq apis Xin Tian
2024-12-18 10:50 ` [PATCH v1 11/16] net-next/yunsilicon: ndo_open and ndo_stop Xin Tian
2024-12-18 10:50 ` [PATCH v1 12/16] net-next/yunsilicon: Add ndo_start_xmit Xin Tian
2024-12-18 10:50 ` [PATCH v1 13/16] net-next/yunsilicon: Add eth rx Xin Tian
2024-12-18 19:47 ` Andrew Lunn
2024-12-20 7:31 ` tianx
2024-12-18 10:50 ` [PATCH v1 14/16] net-next/yunsilicon: add ndo_get_stats64 Xin Tian
2024-12-18 10:50 ` [PATCH v1 15/16] net-next/yunsilicon: Add ndo_set_mac_address Xin Tian
2024-12-18 10:50 ` [PATCH v1 16/16] net-next/yunsilicon: Add change mtu Xin Tian
2024-12-18 18:31 ` Andrew Lunn
2024-12-20 7:07 ` tianx
2024-12-19 0:35 ` [PATCH v1 00/16] net-next/yunsilicon: ADD Yunsilicon XSC Ethernet Driver Jakub Kicinski
2024-12-20 6:40 ` tianx
2024-12-20 9:21 ` Andrew Lunn
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).